I think this post (Evolution and irrationality) is interesting, but I don't know what to make of it due to a lack of general expertise:
Sozou's idea is that uncertainty as to the nature of an underlying hazard can explain time-inconsistent preferences. Suppose there is a hazard that may prevent the pay-off from being realised. This provides a basis (beyond impatience) for discounting a future pay-off. But suppose further that you do not know the specific probability of that hazard being realised (although you know its probability distribution). What is the proper discount rate?
Sozou shows that as time passes, one can update one's estimate of the probability of the underlying hazard. If after a week the hazard has not occurred, this suggests that the hazard's probability is not very high, which allows the person to reduce the rate at which they discount the pay-off. When offered a choice between one bottle of wine 30 days in the future and two bottles 31 days in the future, the person applies a lower discount rate over that extra day than they would over a day in the near term, because they know that with each day that passes without a hazard preventing the pay-off, their estimate of the hazard's probability will drop.
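For concreteness, here is a minimal numeric sketch of the mechanism. The exponential prior over the hazard rate and the rate parameter 0.8 are my illustrative assumptions, not numbers from Sozou's paper; under that prior the expected survival probability works out to the hyperbolic form S(t) = r/(r + t), and the wine preference reverses with distance:

```python
# A toy version of Sozou's setup (illustrative assumptions throughout).
# The hazard rate `lam` is unknown; assume an exponential prior
# p(lam) = r * exp(-r * lam). Averaging the survival probability
# exp(-lam * t) over this prior gives the hyperbolic form
#   S(t) = r / (r + t).

R = 0.8  # prior rate parameter, an arbitrary illustrative choice

def survival(t, r=R):
    """Prior probability that no hazard has blocked the pay-off by time t."""
    return r / (r + t)

def value(bottles, day):
    """Expected utility of a promise of `bottles` on `day`, with utility linear in bottles."""
    return bottles * survival(day)

# Up close, the single earlier bottle wins:
print(value(1, 0), value(2, 1))    # 1.000 vs ~0.889

# At a distance, waiting one extra day for the second bottle wins:
print(value(1, 30), value(2, 31))  # ~0.026 vs ~0.050
```

The per-day discount factor S(t+1)/S(t) = (r+t)/(r+t+1) climbs toward 1 as t grows, which is exactly the effect described above: each day the hazard fails to occur lowers the estimated hazard rate.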
This was my initial reaction to the OP, stated more rigorously. Our risk assessments seem to be hardwired into several of our heuristics, and those assessments are no longer appropriate because our environment has become much less dangerous.
It seems to me that utility functions are not equivalent only up to affine transformations. Both utility functions and subjective probability distributions take some relevant real-world factor into account, and it seems you can move such a factor between your utility function and your probability distribution while still producing exactly the same choices over all possible decisions.
In the case of discounting, for example, you could represent the uncertainty in a time-discounted utility function, or you could represent it in your probability distribution. You could even throw away your probability distribution entirely and have your utility function absorb all subjective uncertainty.
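Here is a hedged sketch of that reshuffling, reusing the toy hazard model above (the function names and numbers are mine, purely for illustration):

```python
# Illustrative only: the same choice behaviour represented two ways.
# Representation A keeps utility undiscounted and puts the hazard in the
# probability term; representation B folds the hazard into a
# time-discounted utility and treats the pay-off as certain.

R = 0.8

def survival(t, r=R):
    return r / (r + t)

def value_a(bottles, day):
    # probability side: P(pay-off realised) * undiscounted utility
    return survival(day) * bottles

def value_b(bottles, day):
    # utility side: certain pay-off (probability 1) * time-discounted utility
    return 1.0 * (survival(day) * bottles)

options = [(1, 0), (2, 1), (1, 30), (2, 31)]
ranking_a = sorted(options, key=lambda o: value_a(*o))
ranking_b = sorted(options, key=lambda o: value_b(*o))
assert ranking_a == ranking_b  # identical preferences over every option
```

The algebra is deliberately trivial, which is the point: the survival factor can sit in either the probability or the utility without changing any decision, so the (probability, utility) pair seems to be pinned down only up to such reshufflings, a larger equivalence class than affine transformations of utility alone.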
At least I think that's possible. Have there been any formal analyses of this idea?
Comments would be appreciated.
There’s this post by Vladimir Nesov.