The problem is deciding how to order certain outcomes in the first place, even in deterministic cases. You can declare orderings however you like, but they will probably end up either violating transitivity or at least being pretty counterintuitive to you when combined.
Also, infinities usually end up requiring the rejection of the continuity/Archimedean axiom, which the vNM theorem uses to get finite numbers to represent utilities. If you want to force it to hold, I think you’ll need to reject scope sensitivity.
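To spell out the axiom in question: continuity says that for any lotteries with $A \preceq B \preceq C$, there is some mixing probability $p \in [0,1]$ such that

$$B \sim pA + (1-p)C.$$

If $C$ is infinitely better than $B$, then every mixture with $p < 1$ is still strictly better than $B$ (any positive chance of $C$ swamps everything else), while $p = 1$ gives $A \prec B$, so no such $p$ exists and continuity fails.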
Yup, I agree. Basically, once you commit to making vNM-style decisions using aggregated utilities (by which I don't just mean summed, I mean aggregated according to some potentially-complicated aggregation function), you're forced to abandon a lot of other plausible desiderata.
Like scope sensitivity. Because there’s “always a bigger infinity” no matter which you choose, any aggregation function you can use to make decisions is going to have to saturate at some infinite cardinality, beyond which it just gives some constant answer.
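Here's a rough sketch of why, as I'd formalize it (my gloss, so take the set-theoretic details with a grain of salt). Suppose the aggregation function assigns to each cardinality of value-bearers a level in some set $S$ of comparable values:

$$V : \mathrm{Card} \to S, \qquad \kappa \le \lambda \implies V(\kappa) \le V(\lambda).$$

Since the cardinals form a proper class while $S$ is a set, some value $s \in S$ must be attained on an unbounded class of cardinals; monotonicity then squeezes $V$ to equal $s$ everywhere above the first such cardinal. That is, $V$ saturates.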
And this is pretty weird. But at some point you just learn to let desiderata go when they turn out to be bad. One experience I remember from college is learning that there's no uniform distribution over the real numbers. In some circumstances you can "fight reality" and use an improper prior as a bit of mathematical sleight of hand. But in any case where you're actually going to use the answer, you just have to accept that infinity is too big to care about all of it equally.
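(The standard argument, for anyone who hasn't seen it: a uniform distribution $\mu$ on $\mathbb{R}$ would have to give every unit interval $[n, n+1)$ the same mass $c$, and countable additivity then forces

$$\mu(\mathbb{R}) = \sum_{n \in \mathbb{Z}} \mu([n, n+1)) = \sum_{n \in \mathbb{Z}} c \in \{0, \infty\},$$

which can never equal $1$, whether $c = 0$ or $c > 0$.)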
>Because there’s “always a bigger infinity” no matter which you choose, any aggregation function you can use to make decisions is going to have to saturate at some infinite cardinality, beyond which it just gives some constant answer.
Couldn’t one use a lexicographic utility function with infinitely many levels? I don’t know exactly how this works out technically. I do know that maximizing the expectation of a lexicographic utility function is equivalent to satisfying the vNM axioms without continuity (see Blume et al. 1989), but they only mention the case of infinitely many levels in passing.
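As I understand the representation, you have a sequence of utility functions $u_1, u_2, \dots$ and prefer lottery $L$ to lottery $M$ iff the expectation vectors compare lexicographically:

$$L \succ M \iff \big(\mathbb{E}_L[u_1], \mathbb{E}_L[u_2], \dots\big) >_{\mathrm{lex}} \big(\mathbb{E}_M[u_1], \mathbb{E}_M[u_2], \dots\big),$$

where $>_{\mathrm{lex}}$ compares the first coordinate at which the two vectors differ. With finitely many levels this is exactly vNM minus continuity; the infinite case is the part I'm unsure about.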
I’m not sure what sort of decision procedure would actually produce decisions if you assign ever-tinier probabilities to theories ever higher in the lexicographic ordering.
Like, infinitely many levels going down is no problem, but going up, it seems like you need all but finitely many levels to be indifferent between your actions before you can make a decision. But maybe I’m just not seeing a trick.
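A toy example of the failure mode I have in mind (my construction, not anything from the paper): order the levels by ascending importance, so level $n+1$ lexically dominates level $n$, and compare two acts with

$$u_n(A) = \begin{cases} 1 & \text{if } n \text{ is even} \\ 0 & \text{if } n \text{ is odd} \end{cases} \qquad u_n(B) = 1 - u_n(A).$$

There is no most-important level at which $A$ and $B$ differ, so "defer to the highest level that isn't indifferent" never returns an answer. You only get a verdict when all but finitely many levels agree the acts are equivalent.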