Ok, it’s possible I’ve misunderstood. To see if I have, clarify something for me, please:
How would you represent the valuing of another agent’s values, using the VNM theorem? That is, let’s say I assign utility to (certain) other people having lots of utility. How would this be represented?
Edit: You know what, while the above question is still interesting, having reread the thread, I actually see the issue now, and it’s simpler. This line:
The VNM axioms assume that everything can be reduced to a unitary “utility”. If this isn’t the case, then you have a problem.
is indeed a misstatement (as it stands, it is incorrect for the reasons you state). It should be:
“Accepting the VNM axioms requires you to assume that everything can be reduced to a unitary “utility”.” (Which is to say, if you accept the axioms, you will be forced to conclude this; and also, assuming this leads you to the VNM axioms.)
If you find that reducing everything to a unitary utility then fails to describe your preferences over outcomes, you have a problem.
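(For concreteness, the standard statement of the result I have in mind, in its usual textbook form: if a preference relation over lotteries satisfies completeness, transitivity, continuity, and independence, then there exists a utility function u, unique up to positive affine transformation, such that

\[ L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u]. \]

That is the sense in which accepting the axioms forces everything onto a single “utility” scale.)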
“Accepting the VNM axioms requires you to assume that everything can be reduced to a unitary “utility”.” (Which is to say, if you accept the axioms, you will be forced to conclude this; and also, assuming this leads you to the VNM axioms.)
With the minor erratum that ‘assume’ would best be replaced with ‘conclude’, ‘believe’, or ‘accept’, this revision seems accurate. For someone taking your position, the most interesting thing about the VNM theorem is that it prompts you to work out just which of the axioms you reject. One man’s modus ponens is another man’s modus tollens. The theory doesn’t care whether it is being used to conclude acceptance of the conclusion or rejection of one or more of the axioms.
If you find that reducing everything to a unitary utility then fails to describe your preferences over outcomes, you have a problem.
Entirely agree. Humans, for example, are not remotely VNM coherent.
This line … is indeed a misstatement (as it stands, it is incorrect for the reasons you state).
I have retracted my criticism via edit. One misstatement does not unfamiliarity make, so even prior to your revision I suspect my criticism was overstated. Pardon me.
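Thank you, and no offense taken.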
Entirely agree. Humans, for example, are not remotely VNM coherent.
Right. And the thing is, that if one were to argue that humans are thereby irrational, I would disagree. (Which is to say, I would not assent to defining rationality as constituting, or necessarily containing, adherence to VNM.)
One man’s modus ponens is another man’s modus tollens. The theory doesn’t care whether it is being used to conclude acceptance of the conclusion or rejection of one or more of the axioms.
Indeed. Incidentally, I suspect the axiom I would end up rejecting is continuity (axiom 3), but don’t quote me on that; I have to get my copy of Rational Choice in an Uncertain World out of storage (as I recall, said book explains the implications of the VNM axioms quite well, and I distinctly recall that my objections to VNM arose when reading it).
Right. And the thing is, that if one were to argue that humans are thereby irrational, I would disagree. (Which is to say, I would not assent to defining rationality as constituting, or necessarily containing, adherence to VNM.)
I tentatively agree. The decision system I tend to model an idealised me as having contains an extra level of abstraction, in order to generalise the VNM axioms and utility-maximisation decision theory to something that does allow the kind of system you are advocating (and which I don’t consider intrinsically irrational).
Simply put, if instead of having preferences over world-histories you have preferences over probability distributions of world-histories, then doing the same math and reasoning gives you an entirely different, but still clearly defined and abstractly consequentialist, way of interacting with lotteries. It means the agent is doing something other than maximising the mean of utility; it could, in effect, be maximising the mean subject to keeping the probability of utility falling below some value under a maximum.
It’s the way inherently and coherently risk-averse agents (and similar non-mean optimisers) would work.
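A toy sketch of the kind of rule I mean (the numbers, the threshold, and the fallback behaviour below are arbitrary choices of mine for illustration, nothing canonical):

```python
# Toy illustration: choosing among lotteries with a rule that is not plain
# expected-utility maximisation. A lottery is a list of (probability, utility) pairs.

def expected_utility(lottery):
    """Mean utility of a lottery."""
    return sum(p * u for p, u in lottery)

def prob_below(lottery, threshold):
    """Probability that realised utility falls below `threshold`."""
    return sum(p for p, u in lottery if u < threshold)

def pick_mean_maximiser(lotteries):
    """The standard expected-utility agent: maximise the mean."""
    return max(lotteries, key=expected_utility)

def pick_risk_averse(lotteries, threshold=0.0, max_bad_prob=0.05):
    """Maximise the mean, but only among lotteries whose probability of utility
    below `threshold` is at most `max_bad_prob` (falling back to all of them
    if none qualify)."""
    acceptable = [lot for lot in lotteries if prob_below(lot, threshold) <= max_bad_prob]
    return max(acceptable or lotteries, key=expected_utility)

safe_bet = [(1.0, 10.0)]                 # 10 utility for certain
gamble = [(0.9, 100.0), (0.1, -500.0)]   # higher mean (40), fat downside

print(pick_mean_maximiser([safe_bet, gamble]))  # picks the gamble
print(pick_risk_averse([safe_bet, gamble]))     # picks the safe bet
```

The second agent is still doing something perfectly well defined with lotteries; it just isn’t maximising the mean.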
Such agents are coherent. It doesn’t matter much whether we call them irrational or not. If that is what they want to do then so be it.
Incidentally, I suspect the axiom I would end up rejecting is continuity (axiom 3), but don’t quote me on that
That does seem to be the most likely axiom to be rejected. At least, that has been my intuition when I’ve considered how plausible agents that aren’t ‘expected’ utility maximisers seem to think.
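(For reference, the continuity/Archimedean axiom says: for any lotteries with

\[ A \succeq B \succeq C, \]

there exists some \( p \in [0,1] \) such that \( B \sim pA + (1-p)C \). The classic way to reject it is to hold preferences where no probability of the worst outcome, however small, can be compensated by the best one.)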
Edit: You know what, while the above question is still interesting,
You’re right, that question does seem interesting. Let me see...
How would you represent the valuing of another agent’s values, using the VNM theorem? That is, let’s say I assign utility to (certain) other people having lots of utility. How would this be represented?
I only ever apply values to entire world histories[1]. That is: consider the entire wavefunction of the universe, which includes all of space, all of time, all Everett branches[2], and so forth. Different possible configurations of that universe are preferred over others on a basis that is entirely arbitrary. It so happens that my preferences over world histories do depend somewhat on computations about how the state of certain other people’s brains at certain times compares to the rest of the configuration of that world history. This preference is not different in nature from preferring histories which do not have lots of copies of wedrifid tortured for billions of years.
It also applies whether or not the other people I have altruistic preferences about happen to have utility functions at all. Them having utility functions would probably make the math easier and the preference-preferences easier to instantiate, but it isn’t necessary. Mind you, I don’t necessarily care equally about all the components of what makes up their ‘utility function’. I could perhaps assign negative weight to, or ignore, certain aspects of it on the basis of what caused those preferences.
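A crude sketch of the shape I mean (the names, types, and weights below are made up for illustration; none of this is real code of mine):

```python
from typing import Callable, Dict

# Stand-in type: whatever structure describes an entire world-history
# (all of space, time, branches and so forth). Purely illustrative.
WorldHistory = object

def history_value(history: WorldHistory,
                  base_value: Callable[[WorldHistory], float],
                  others_welfare: Dict[str, Callable[[WorldHistory], float]],
                  weights: Dict[str, float]) -> float:
    """Score a whole world-history: an arbitrary base valuation plus weighted
    terms for how well each cared-about person fares in that history. The
    welfare functions need not be VNM utility functions, and the weights can
    be zero or negative."""
    total = base_value(history)
    for person, welfare in others_welfare.items():
        total += weights.get(person, 0.0) * welfare(history)
    return total
```

The point is just that ‘valuing their values’ shows up as one more ingredient in how a history is scored, not as a different kind of object from my other preferences over histories.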
Translating how strongly I prefer one history over another into a utility function occurs by the normal mechanism (i.e. “require 'VNM'; wedrifid.preferences.to_utility_function”). The altruistic-values issue is orthogonal to the having-a-utility-function issue.
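And a toy version of that ‘normal mechanism’ (the standard best/worst calibration trick; the indifference-probability oracle is something I am assuming the agent’s lottery preferences can supply, not an existing library):

```python
from typing import Callable, Dict, List

def utility_function(histories: List[object],
                     indifference_prob: Callable[[object, object, object], float]
                     ) -> Dict[object, float]:
    """VNM-style calibration: fix a most- and a least-preferred history, then the
    utility of any history x is the probability p at which the agent is indifferent
    between x for certain and a lottery giving `best` with probability p and
    `worst` otherwise. `indifference_prob(x, best, worst)` is assumed to be
    answerable from the agent's preferences over lotteries."""
    best, worst = histories[0], histories[-1]  # assume already sorted best to worst
    return {x: indifference_prob(x, best, worst) for x in histories}
```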
Of course, in practice I rely on and discuss much simpler things but this is from the perspective of considering the simpler models to be approximations of and simplifications of world-history preferences.
Ignore the branches part if you don’t believe in those—the difference isn’t of direct importance to the immediate question even though it has tangential relevance to your overall position.