I think that weak outcome-lottery dominance is inconsistent with transitivity + unbounded utilities in both directions (or unbounded utilities in one direction + the sure thing principle), rather than merely producing strange results. Though we could summarize “violates weak outcome-lottery dominance” as a strange result.
Violating weak outcome-lottery dominance means that a mix of gambles, each strictly better than a particular outcome X, can fail to be at least as good as X. If you give up on this property, or on transitivity, then even if you are assigning numbers you call “utilities” to actions, I don’t think it’s reasonable to call them utilities in the decision-theoretic sense, and I’m comfortable saying that your procedure should no longer be described as “expected utility maximization.”
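To make the failure concrete, here is a minimal numerical sketch (my own illustrative construction, not one from the argument above) of how unbounded utilities break this. Each gamble in a countable mixture is strictly better than an outcome X of utility 0, yet the mixture’s expected utility is only a conditionally convergent sum, so its value depends on the order of summation and can be steered negative:

```python
# Gamble n (given mixture weight 2^-n): utility +2^(n+1) * (1/n + 1/n^2) with
# probability 1/2, and utility -2^(n+1) * (1/n) with probability 1/2.
# Its expected utility is 2^n / n^2 > 0, so every gamble strictly beats X.
#
# Within the mixture, the positive branch of gamble n contributes
# weight * prob * utility = 1/n + 1/n^2, and the negative branch contributes
# -1/n. Both branch series diverge, so the mixture's expected utility is a
# conditionally convergent sum whose value depends on summation order.

N = 500_000
pos = [1 / n + 1 / n**2 for n in range(1, N + 1)]  # positive-branch terms
neg = [-1 / n for n in range(1, N + 1)]            # negative-branch terms

# Summed gamble-by-gamble, the mixture looks strictly better than X:
by_gamble = sum(p + q for p, q in zip(pos, neg))   # = sum 1/n^2 -> pi^2/6 > 0

# But a Riemann-style rearrangement of the *same* terms can be steered toward
# any value, including a negative one -- here the partial sums track -1.
target, total, i, j = -1.0, 0.0, 0, 0
for _ in range(200_000):
    if total > target:
        total += neg[j]; j += 1
    else:
        total += pos[i]; i += 1

print(by_gamble)  # ~1.64: "the mixture is better than X"
print(total)      # ~-1.0: "the mixture is worse than X"
```

The point is not that one ordering is “correct”: once the positive and negative contributions each diverge, there is no canonical expected utility for the mixture at all, which is exactly the situation the dominance violation exploits.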
So I’d conclude that there simply don’t exist any preferences represented by unbounded utility functions (over the space of all lotteries), and that there is no patch to the notion of utility maximization that fixes this problem without giving up on some defining feature of EU maximization.
There may nevertheless be theories that are well-described as maximizing an unbounded utility function in some more limited situations. And there may well be preferences over a domain other than lotteries which are described intuitively by an unbounded utility function. (Though note that if you are only considering lotteries over a finite space, then your utility function is necessarily bounded.) And although it seems somewhat less likely, it could also be that in retrospect I will feel I was wrong about the defining features of EU maximization, and that mixing together positive lotteries to get a negative lottery is actually consistent with its spirit.
I think it’s also worth observing that although St Petersburg cases are famously paradox-riddled, these cases seem overwhelmingly important on a conventional utilitarian view even before we consider any exotic hypotheses. Indeed, I personally became unhappy with unbounded utilities not because of impossibility results but because I tried to answer questions like “How valuable is it to accelerate technological progress?” or “How bad is it if unaligned AI takes over the world?” and immediately found that EU maximization with anything like “utility linear in population size” seemed to be unworkable in practice. I could find no sort of common-sensical regularization that let me get coherent answers out of these theories, and I’m not sure what it would look like in practice to try to use them to guide our actions.