Lexicographical preference orderings seem to come naturally to humans. Sentiments like “no amount of money is worth one human life” are commonly expressed.
Now, that particular sentiment is wrong because money can be used to purchase human lives.
The other problem comes from using probability and expected utility, which makes anything lexicographically second completely worthless in all realistic cases. It’s one thing to say that you prefer apples to pears lexicographically when there are ten of each lying around and everything is deterministic (just take the ten apples first then the ten pears afterwards). But does it make sense to say that you’d prefer one chance in a trillion of extending someone’s life by a microsecond, over a billion euros of free consumption?
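To make the dominance problem concrete, here is a minimal sketch (in Python; the probabilities and payoffs are invented for this example) of a strict lexicographic ordering under expectations, treating each gamble as a list of (probability, U, V) outcomes and comparing expected U first, with expected V only as a tie-breaker:

```python
def expected(outcomes):
    """outcomes: list of (probability, U, V) triples; returns (E[U], E[V])."""
    eu = sum(p * u for p, u, _ in outcomes)
    ev = sum(p * v for p, _, v in outcomes)
    return eu, ev

# One-in-a-trillion chance of a microsecond of extra life (a tiny U gain)...
gamble_life = [(1e-12, 1e-6, 0.0), (1.0 - 1e-12, 0.0, 0.0)]
# ...versus a certain billion euros of free consumption (a pure V gain).
gamble_money = [(1.0, 0.0, 1e9)]

# Python compares tuples lexicographically, so any positive expected U,
# however microscopic, beats any amount of expected V.
assert expected(gamble_life) > expected(gamble_money)
print(expected(gamble_life), expected(gamble_money))  # (1e-18, 0.0) vs (0.0, 1e9)
```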
So this short post will propose a more sensible, smoothed version of lexicographical ordering, suitable to capture the basic intuition, but usable with expected utility.
If the utility U has lexicographical priority and V is subordinate to it, then choose a value a and maximise:
$$W = U + \frac{a}{2}\tanh(V).$$
In that case, increases in expected V always cause non-trivial increases in expected W; but since tanh is bounded between −1 and 1, the V-term can swing W by strictly less than a in total, so an increase of a in U will always be more important than any possible increase in V.
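A quick numerical sketch of this behaviour (the function name and the sample values of U, V, and a are illustrative, not from the post):

```python
import math

def smoothed_lex_utility(u, v, a=1.0):
    """Smoothed lexicographic utility W = U + (a/2) * tanh(V).

    tanh lies strictly between -1 and 1, so the V-term contributes less
    than a/2 in absolute value; a gain of a in U therefore outweighs any
    possible swing in V, while changes in V still move W by some amount.
    """
    return u + (a / 2.0) * math.tanh(v)

a = 1.0

# A gain of a in U beats even a large swing in V (from very bad to very good):
more_u_worse_v = smoothed_lex_utility(u=a, v=-5.0, a=a)
less_u_better_v = smoothed_lex_utility(u=0.0, v=5.0, a=a)
assert more_u_worse_v > less_u_better_v

# Yet improvements in V still register as real gains in W:
assert smoothed_lex_utility(0.0, 2.0, a) > smoothed_lex_utility(0.0, 1.0, a)
```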
This seems related to scope insensitivity and availability bias. No amount of money (that I have direct control of) is worth one human life (in my Dunbar group). No money (which my mind exemplifies as $100k or whatever) is worth the life of my example human, a coworker. Even then, it's false, but it's understandable.
More importantly, categorizations of resources (and of people, probably) are map, not territory. The only rational preference ranking is over reachable states of the universe. Or, if you lean a bit far towards skepticism/solipsism, over sums of future experiences.
Preferences exist in the map, in human brains, and we want to port them to the territory with the minimum of distortion.
Oh, wait. I’ve been treating preferences as territory, though always expressed in map terms (because communication and conscious analysis are map-only). I’ll have to think about what it would mean if they were purely map artifacts.