Saturating utilities as a model
Okay, it is a very raw idea, but consider utility processing that works as follows:
1: The utility I’m speaking of is not ‘happiness’, nor is it ‘strength of the compulsion’; the utility is used only for comparing futures so as to pick the one with the larger utility. Applying the same monotonically increasing function to both sides of a comparison does not change the outcome of the comparison, so it works as if the function were not there.
The utility is an array of n numbers. The arrays are compared after pseudo-summing them with a nested sigmoid, like:
a[1] + k[1]*sigmoid(a[2] + k[2]*sigmoid(a[3] + …))
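A minimal sketch of what I mean, in Python (the names pseudo_sum and prefer are just for illustration, and the indexing is shifted to 0-based; it assumes len(k) >= len(a) - 1):

```python
import math

def sigmoid(x):
    """Logistic function in a numerically stable form; its output is
    bounded in (0, 1), which is what gives the saturation."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def pseudo_sum(a, k):
    """Collapse a utility array into one comparable number:
    a[0] + k[0]*sigmoid(a[1] + k[1]*sigmoid(a[2] + ...))."""
    acc = a[-1]
    # Fold from the innermost term outward.
    for ai, ki in zip(reversed(a[:-1]), reversed(k[:len(a) - 1])):
        acc = ai + ki * sigmoid(acc)
    return acc

def prefer(world_a, world_b, k):
    """Pick whichever of the two utility arrays pseudo-sums higher."""
    return world_a if pseudo_sum(world_a, k) >= pseudo_sum(world_b, k) else world_b
```

The same k goes into both sides of the comparison; the weights are a property of the comparator, not of the worlds being compared.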
This has a bunch of nasty properties (e.g. it is not clear how to deal with probabilities here), but it may capture the human view on torture vs. dust specks and on similar problems like Pascal’s wager, where low-quality arguments may simply go into a[n] for large n, rather than be assigned any definite low probability.
Note that, usually, the two future worlds being compared are identical up to some index n, so the comparison can be made starting from that n, disregarding the equal earlier terms.
Furthermore, the comparison allows for ‘short-circuit evaluation’: after a few steps, no further values need to be considered.
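For concreteness, one way such a short-circuiting comparator could look, building on sigmoid and pseudo_sum from the sketch above (it assumes every k[i] > 0 and that k is at least as long as the arrays):

```python
def compare_futures(a, b, k):
    """Classic -1/0/+1 comparator over utility arrays, evaluating as
    few terms as possible. Assumes all k[i] > 0 and reuses
    pseudo_sum() from the earlier sketch."""
    for i in range(min(len(a), len(b))):
        if a[i] == b[i]:
            # Equal leading terms contribute identically to both sides
            # (sigmoid is monotonic and k[i] > 0), so skip them.
            continue
        # The nested tail k[i]*sigmoid(...) lies strictly between 0 and
        # k[i] on both sides, so it can shift the difference at this
        # level by less than k[i] in either direction.
        if abs(a[i] - b[i]) >= k[i]:
            return 1 if a[i] > b[i] else -1
        # The difference is small enough that deeper terms matter:
        # fall back to the full pseudo-sums from this level down.
        sa, sb = pseudo_sum(a[i:], k[i:]), pseudo_sum(b[i:], k[i:])
        return (sa > sb) - (sa < sb)
    return 0
```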
The obvious model that comes to mind, if you observe this comparator as a black box, is a linear sum with weights k[1] >> k[2], k[2] >> k[3], and so on. That is a fairly good approximation, but it breaks down once you start using really huge numbers like 3^^^^3. The sigmoid eats up-arrows for breakfast and asks for more.
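A toy illustration of the breakdown (3^^^^3 will not fit in any float, so an ordinary astronomically large number has to stand in for it; the sigmoid is written via tanh so it does not overflow):

```python
import math

def sigmoid(x):
    # Logistic function via the identity 1/(1+e^-x) = (1 + tanh(x/2))/2,
    # which stays finite for arbitrarily large |x|.
    return 0.5 * (1.0 + math.tanh(x / 2.0))

huge = 1e300   # stand-in for 3^^^^3, which no float can represent

# Linear model with steeply decreasing weights: the huge low-priority
# term still swamps the high-priority term 10.0 entirely.
linear = 1.0 * 10.0 + 1e-30 * huge        # ~1e270

# Saturating model: the low-priority term's influence is capped at its
# weight, no matter how many up-arrows go into it.
saturating = 10.0 + 1.0 * sigmoid(huge)   # ~11.0

print(linear, saturating)
```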
It seems to me that this does accurately capture the kind of behaviour that is not, in general, very impressed by Knuth’s up-arrow notation, and sigmoids are biologically plausible. Other saturating, monotonically increasing functions could be employed instead.
One could probably come up with a nicer model that yields identical outcomes but in which n does not need to be an integer.
This seems similar to lexicographic preferences. Also, search for hyperreal or surreal utility functions on LW; there have been several discussions.
Yep, that was the inspiration: a ‘soft’ lexicographic preference. We use lexicographic preferences in software very often because they have a neat property: you can always insert a value between any two others. It is also easy to dismiss lexicographic preferences as a model, though, because of the unnatural ‘strictness’ of the preference.
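For instance, the ‘insert between any two’ trick looks roughly like this (a sketch, not from the discussion above; it assumes a < b and enough float precision at the split point):

```python
def between(a, b):
    """Return a tuple that sorts strictly between a and b under the
    usual lexicographic order. Assumes a < b and spare float precision
    at the position where they first differ."""
    i = 0
    # Skip the shared prefix.
    while i < len(a) and i < len(b) and a[i] == b[i]:
        i += 1
    if i == len(a):
        # a is a proper prefix of b: extend a with anything smaller
        # than b's next element.
        return a + (b[i] - 1.0,)
    # Otherwise split the difference at the first disagreement.
    return a[:i] + ((a[i] + b[i]) / 2.0,)

# (1, 2) < (1, 2.5) < (1, 3), and (1,) < (1, 2.0) < (1, 3)
print(between((1.0, 2.0), (1.0, 3.0)), between((1.0,), (1.0, 3.0)))
```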