Continuity axiom of vNM
In a previous post, I left a somewhat cryptic comment on the continuity/Archimedean axiom of vNM expected utility.
(Continuity/Archimedean) This axiom (and acceptable weaker versions of it) is much more subtle than it seems; “No choice is infinitely important” is what it seems to say, but “ ‘I could have been a contender’ isn’t good enough” is closer to what it does. Anyway, that’s a discussion for another time.
Here I’ll explain briefly what I mean by it. Let’s drop that axiom, and see what could happen. First of all, we could have a utility function with non-standard real values. This allows some things to be infinitely more important than others. A simple illustration is lexicographic ordering: e.g. my utility function consists of the amount of euros I end up owning, with the amount of sex I get serving as a tie-breaker.
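A lexicographic preference like this is easy to make concrete. A minimal sketch (the tuple representation and the function name are my own, purely illustrative): outcomes are (euros, sex) pairs, and comparison falls through to the second coordinate only on ties in the first. No single real-valued utility function can represent this ordering, though a non-standard one (euros plus an infinitesimal times sex) can.

```python
def lex_prefers(x, y):
    """Lexicographic preference: x = (euros, sex) beats y iff x has
    more euros, or equal euros and more sex.  Python compares tuples
    lexicographically, which is exactly this ordering."""
    return x > y

assert lex_prefers((100, 0), (99, 1_000_000))  # euros always dominate
assert lex_prefers((100, 2), (100, 1))         # sex only breaks ties
```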
There is nothing wrong with such a function! First, because in practice it functions as a standard utility function (I’m unlikely to be able to indulge in sex in a way that has absolutely no costs or opportunity costs, so the amount of euros will always predominate). Secondly, because even if it does make a difference… it’s still expected utility maximisation, just a non-standard version.
But worse things can happen if you drop the axiom. Consider this decision criterion: I will act so that, at some point, there will have been a chance of me becoming heavy-weight champion of the world. This is compatible with all the other vNM axioms, but is obviously not what we want as a decision criterion. In the real world, such a criterion is vacuous (there is a non-zero chance of me becoming heavyweight champion of the world right now), but it certainly could apply in many toy models.
That’s why I said that the continuity axiom is protecting us from “I could have been a contender (and that’s all that matters)” type reasoning, not so much from “some things are infinitely important (compared to others)”.
Also notice that the quantum many-worlds version of the above decision criterion—“I will act so that the measure of type X universes is non-zero”—does not sound quite as stupid, especially if you bring in anthropics.
I don’t follow. Could you make this example more formal, giving a set of outcomes, a set of lotteries over these outcomes, and a preference relation on these that corresponds to “I will act so that, at some point, there will have been a chance of me becoming a heavy-weight champion of the world”, and which fails Continuity but satisfies all other VNM axioms? (Intuitively this sounds more like it’s violating Independence, but I may well be misunderstanding what you’re trying to do since I don’t know how to do the above formalization of your argument.)
Take a reasoner who can make pre-commitments (or a UDT/TDT type). This reasoner, in effect, only has to make a single decision for all time.
Let A, B, C… be pure outcomes, a, b, c,… be lotteries. Then define the following pseudo-utility function f:
f(a) = 1 if the outcome A appears with non-zero probability in a, f(a) = 0 otherwise. The decision maker will use f to rank options.
This clearly satisfies completeness and transitivity (because it uses a numerical scale). And then… it gets tricky. I’ve seen independence written both in a < form and a ≤ form (see http://en.wikipedia.org/wiki/Expected_utility_hypothesis vs http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem ). I have a strong hunch that the two versions are equivalent, given the other axioms.
Anyway, the above decision process satisfies ≤ independence (but not < independence).
To see that the decision process satisfies ≤ independence, note that f(pa+(1-p)b) = max(f(a), f(b)) for 0 < p < 1. So if f(a) ≤ f(b), then f(pa+(1-p)c) = max(f(a), f(c)) ≤ max(f(b), f(c)) = f(pb+(1-p)c).
Yes, the two versions of independence are equivalent given the other axioms. If you don’t have continuity to make them equivalent, I think the natural thing to do is to ask for both types of independence.
(The intuition is: independence feels like it should demand all of these things. Normally it’s not stated like that because it’s clunky to add extra statements when one is enough.)
So if we ditch continuity, do the two independence axioms ensure we have utility functions (possibly non-standard ones)?
I find it quite plausible that would ensure this (60-80% credence?), but it’s not obvious. In particular the way you normally prove that there’s a utility function is that you construct it, and you use the continuity axiom to do this.
Without the continuity axiom, maybe you can prove some representation with something satisfying the axioms for the reals … but it looks hard.
At the very minimum, you need to have a distributivity law for lotteries. If 50%(50%A+50%B)+50%(50%A+50%C) is not defined to be the same thing as 50%A+25%B+25%C, then it’s easy to find counter-examples...
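The distributivity law in question is just the flattening of compound lotteries. A minimal sketch (dict representation my own): the example 50%(50%A+50%B)+50%(50%A+50%C) flattens to 50%A+25%B+25%C.

```python
def mix(p, a, b):
    """Flatten the compound lottery p*a + (1-p)*b into a single
    probability distribution over pure outcomes."""
    out = {}
    for o, q in a.items():
        out[o] = out.get(o, 0) + p * q
    for o, q in b.items():
        out[o] = out.get(o, 0) + (1 - p) * q
    return out

inner1 = mix(0.5, {'A': 1}, {'B': 1})   # 50%A + 50%B
inner2 = mix(0.5, {'A': 1}, {'C': 1})   # 50%A + 50%C
assert mix(0.5, inner1, inner2) == {'A': 0.5, 'B': 0.25, 'C': 0.25}
```

If you refuse to identify compound lotteries with their flattened forms, the two objects above are distinct, and that is exactly the loophole the counter-examples exploit.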
Yes, I think in the classical conception of lotteries these are regarded as the same. You could reject that, but it seems like it would be similar to how some people think that (A, when B was on the table) is a different outcome from (A, when B was not on the table), and so may be assigned a different utility.
I’ll try building counter-examples first, then...
It seems that you do have that result, see here: http://link.springer.com/article/10.1007%2FBF01766393
However this seems to require a strengthening of the independence axiom, so that the implication goes in the opposite direction in some cases (see axiom 6, page 71).
Incidentally, < independence isn’t enough on its own either.
Pick a standard expected utility situation, with A<B for two pure outcomes A and B. Then arbitrarily set A=B.
< independence is of the form “if a < b, then pa+(1-p)c < pb+(1-p)c”. The indifference A=B never satisfies the hypothesis, and is never forced as a conclusion.
EDIT: this violates transitivity, alas.
I am interested in this topic, but do not think this post was clearly written.
I agree.
Something seems upside-down here. It sounds like you are arguing for an axiom based on the fact that the axiom suffices to rule out certain forms of stupidity. But I don’t think a principle should be considered “an axiom of rational choice” unless conformity to it is necessary to rule out certain forms of stupidity.