I think vNM is a bit strange, because it is both too lax and too restrictive.
It’s too lax because it allows an agent to have basically any preference whatsoever, as long as the preference is linear in probability. To unpack: the intrinsic preferences of an agent need not be monotonic in the amount of mass-energy/spacetime it controls, need not be computable or continuous or differentiable… they just need to conform to “a (2n)% chance of X is twice as good (or bad) as an n% chance of X”. Also, the twitching robot is vNM-rational for some utility function, but arguably a degenerate one.
But that constraint is also restrictive: vNM requires you to be risk-neutral. Risk aversion violates preferences being linear in probability, and being vNM probably causes St. Petersburging all over the place. Many people desperately want risk aversion, but that’s not the vNM way.
I don’t know much about alternatives to vNM, but I hope people work on them; it seems worth it. I know some people are thinking about this geometrically, in terms of the shape of equivalence classes through the probability simplex (vNM gives hyperplanes, but other shapes are possible).
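The “linear in probability” constraint can be made concrete with a tiny numerical check (a sketch, using a made-up three-outcome utility function; the specific numbers are arbitrary): the expected utility of a mixture of two lotteries equals the same mixture of their expected utilities, which is why vNM indifference sets in the probability simplex are slices of hyperplanes.

```python
# Hypothetical utility function over three outcomes (arbitrary numbers).
u = [0.0, 1.0, 10.0]

def expected_utility(lottery):
    """Expected utility of a lottery, given as outcome probabilities."""
    return sum(p * ui for p, ui in zip(lottery, u))

def mix(a, b, t):
    """Convex combination t*a + (1-t)*b of two lotteries."""
    return [t * pa + (1 - t) * pb for pa, pb in zip(a, b)]

a = [0.5, 0.3, 0.2]
b = [0.1, 0.6, 0.3]
t = 0.25

# Linearity in probability: EU(t*a + (1-t)*b) == t*EU(a) + (1-t)*EU(b).
lhs = expected_utility(mix(a, b, t))
rhs = t * expected_utility(a) + (1 - t) * expected_utility(b)
print(abs(lhs - rhs) < 1e-12)  # True
```

The level sets of `expected_utility` are exactly the hyperplane slices mentioned above; a non-vNM agent would have curved level sets instead.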
Okay, but is avoiding St. Petersburg risk aversion or loss aversion? My impression is that many such cases contain an equivalent of “you just die” (for example, you lose all your money), which is very low utility, so you can sort of recover avoiding St. Petersburg by setting utility to the log of your bankroll, or something like that.
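As a sanity check on the log-of-bankroll idea, here is a minimal sketch (assuming the classic St. Petersburg payoffs of 2^k dollars with probability 2^-k, which the comment doesn’t specify): expected wealth diverges, but expected log-wealth converges to 2·ln(2).

```python
import math

# Classic St. Petersburg lottery: win 2^k dollars with probability 2^-k.
# E[payoff]     = sum_k 2^-k * 2^k      = 1 + 1 + 1 + ...   -> diverges
# E[log payoff] = sum_k 2^-k * k*ln(2)  = 2*ln(2)           -> converges

def expected_value(n_terms):
    """Partial sum of E[payoff] over the first n_terms outcomes."""
    return sum(2**-k * 2**k for k in range(1, n_terms + 1))

def expected_log_value(n_terms):
    """Partial sum of E[log(payoff)] over the first n_terms outcomes."""
    return sum(2**-k * math.log(2**k) for k in range(1, n_terms + 1))

print(expected_value(50))      # 50.0 -- grows linearly with n_terms
print(expected_log_value(50))  # ~1.386, i.e. 2*ln(2)
```

So a log utility in bankroll does tame this particular lottery, which is the “sort-of recover” move described above.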
Good point! St. Petersburg requires utility to be monotonic (ideally linear) in something other than probability (and ideally something like unbounded, or at least increasing for a while).
This doesn’t have to be the case for all utility functions (especially since unbounded utilities are bad). Probabilities are strictly bounded, so having utility be linear in them is not a huge problem. Thanks for changing my mind!
For my general reasoning about unbounded utilities, see here.
This doesn’t work if the lottery is in utils rather than dollars/money/whatever instrumental resource.
Yep, but my honest position on St. Petersburg lotteries is that they do not exist in “natural units”, i.e., in counts of objects in the physical world.
Reasoning: if you predict with probability p that you will encounter a St. Petersburg lottery that creates an infinite number of happy people in expectation (the version of the St. Petersburg lottery for total utilitarians), then you should already set your expectation of the number of happy people to infinity, because

E[number of happy people]
= p * E[number of happy people due to the St. Petersburg lottery] + (1 - p) * E[number of happy people for all other reasons]
= p * inf + (1 - p) * E[number of happy people for all other reasons]
= inf.
Therefore, if you don’t already think that the expected number of future happy people is infinite, then you shouldn’t expect a St. Petersburg lottery to happen at any point in the future.
Therefore, you should denominate your utility either in “natural units” or in some “nice” function of “natural units”.
I agree with your claim that vNM is in some ways too lax.
vNM is … too restrictive … [because] vNM requires you to be risk-neutral. Risk aversion violates preferences being linear in probability … Many people desperately want risk aversion, but that’s not the vNM way.

Do many people desperately want to be risk averse about the probability that a given outcome will be achieved? I agree many people want to be loss averse about, e.g., how many dollars they will have. Scott Garrabrant provides an example in which a couple wishes to be fair to its members by compensating for other scenarios in which things would’ve been done the husband’s way (even though those scenarios did not occur).
Scott’s example is … sort of an example of risk aversion about probabilities? I’d be interested in other examples if you have them.