>Because there’s “always a bigger infinity” no matter which you choose, any aggregation function you can use to make decisions is going to have to saturate at some infinite cardinality, beyond which it just gives some constant answer.
Couldn’t one use a lexicographic utility function with infinitely many levels? I don’t know exactly how this works out technically. I do know that maximizing the expectation of a lexicographic utility function is equivalent to the vNM axioms without the continuity axiom (Blume et al., 1989), but they mention the case of infinitely many levels only in passing.
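For concreteness, here is a minimal sketch of the finite-level case (my own toy construction, not from Blume et al.): each outcome gets a tuple of utilities, one per priority level, and lotteries are ranked by comparing expected-utility vectors lexicographically. The outcome names and numbers are made up for illustration.

```python
# Lexicographic expected utility with finitely many levels.
# A lottery is a dict {outcome: probability}; `levels` lists one
# utility function per level, highest priority first.

def lex_expected_utility(lottery, levels):
    """Return the per-level expected-utility vector of a lottery."""
    return tuple(
        sum(p * u(o) for o, p in lottery.items())
        for u in levels
    )

# Two priority levels: level 0 strictly dominates level 1.
levels = [
    lambda o: 1.0 if o == "saved_world" else 0.0,                    # top level
    lambda o: {"saved_world": 0.0, "cake": 1.0, "nothing": 0.0}[o],  # lower level
]

a = {"saved_world": 0.001, "nothing": 0.999}  # tiny chance of the top-level good
b = {"cake": 1.0}                             # certain lower-level good

# Python compares tuples lexicographically, so any positive expectation
# at the top level beats any value at lower levels: a is chosen over b.
best = max([a, b], key=lambda l: lex_expected_utility(l, levels))
```

This also shows why continuity fails: no probability of the top-level good is small enough for `b` to catch up.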
I’m not sure what sort of decision procedure would actually produce an output if you assign ever-tinier probabilities to theories ever higher in the lexicographic ordering.
Like, infinitely many levels going down is no problem, but going up seems to require that all but finitely many levels be indifferent to your actions before you can make a decision. Maybe I’m just missing a trick, though.
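One way the "all but finitely many levels indifferent" condition can be made concrete (a toy sketch of my own, not something from the thread): if levels are indexed upward without bound but each option's expected-utility profile is nonzero at only finitely many levels, comparison becomes decidable by finding the highest level at which the profiles differ.

```python
# Sparse profiles over upward-unbounded levels: {level: expected_utility},
# with only finitely many nonzero entries, so every level outside a finite
# set is indifferent between the options being compared.

def lex_compare_ascending(u, v):
    """Compare two sparse profiles; higher level = higher priority.
    Returns 1 if u is preferred, -1 if v is, 0 if indifferent."""
    for level in sorted(set(u) | set(v), reverse=True):
        diff = u.get(level, 0.0) - v.get(level, 0.0)
        if diff != 0:
            return 1 if diff > 0 else -1
    return 0  # profiles agree at every level

a = {0: 5.0, 17: 0.001}  # tiny edge at a very high level
b = {0: 100.0}           # large advantage only at the bottom level
# a is preferred: level 17 dominates everything below it.
```

With this representation the loop always terminates; drop the finite-support assumption and there is no highest differing level to start from, which is exactly the problem with levels going up forever.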