According to orthodox expected utility theory, the boundedness of the utility function follows from standard decision-theoretic assumptions, like Savage’s ~~fairly weak~~ axioms.
I notice that Savage’s axioms require you to have consistent preferences over an unreasonably broad set of actions, namely any state-outcome relationship that could mathematically exist, even if it is completely and extremely physically impossible.
I think that’s an extremely strong decision-theoretic assumption.
Fair. I’ve stricken out the “fairly weak”. I think this is true of the vNM axioms, too. Still, “completely and extremely physically impossible” to me just usually means very, very low probability, not probability 0. We could be wrong about physics. See also Cromwell’s rule. So, if you want your theory to cover everything that’s extremely unlikely but not actually totally ruled out (probability 0), it really needs to cover a lot. There may be some things you can reasonably assign probability 0 to (other than individual events drawn from a continuum, say), or some probability assignments you aren’t forced to consider (they are your subjective probabilities, after all), so Savage’s axioms could be stronger than necessary.
I don’t think it’s reasonable to rule out all possible realizations of Christiano’s St. Petersburg lotteries, though. You could still ignore these possibilities, and I think this is basically okay, too, but it seems hard to come up with a satisfactory principled reason to do so, so I’d guess ignoring them is incompatible with normative realism about decision theory (which I doubt anyway).
One notion that deconfused these sorts of incredibly low probabilities for me is to just do a case split.
Suppose we have a cup of coffee. Probably, if you drink it, nothing much happens. But by Cromwell’s rule it is conceivable that it was actually planted by an eldritch trickster god, and that if you drink it the trickster god will torture 3^^^^^3 people for 100 years.
Now obviously the trickster god scenario is very unlikely; I’d say it has much less than 1e-1000000 probability. (IMO the probability should have at least as many zeros as there are characters in my description of the scenario, but that would be unwieldy.) For the purposes of this thought experiment, though, let’s round it to 1e-1000000.
Would it be bad to drink the coffee? Well, if we have linear unbounded utility, we can do the expected utility calculation and get 1e-1000000 * 3^^^^^3 in expected disutility, which is far too big to be even close to acceptable.
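Spelling the arithmetic out (a rough sketch; the utility scale here is a placeholder, and the argument only needs the utility to be linear and unbounded in the number of people tortured):

$$
\mathbb{E}[\Delta U(\text{drink})] \;\approx\; \underbrace{10^{-10^{6}}}_{P(\text{trickster god})} \cdot \bigl(-\,3\uparrow\uparrow\uparrow\uparrow\uparrow 3\bigr) \;\ll\; -10^{10^{6}},
$$

since $3\uparrow\uparrow\uparrow\uparrow\uparrow 3 \gg 10^{2\cdot 10^{6}}$ (it dwarfs any exponential tower you could physically write down), so no probability you could actually write out as a decimal is small enough to pull the product back into an acceptable range.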
But this gives you the expected badness. In reality, either we are in the trickster god scenario, or we are not. If we are not in the trickster god scenario (or any scenario like it), then it’s fine to drink the coffee. If we are in the scenario, then it’s incredibly bad to drink it.
So there’s a small probability that we’d be making a terrible mistake by drinking it, and a large probability that we’d be making only a minor mistake by not drinking it. Though acting on the trickster god belief would probably lead to a bunch of other correlated behaviors that, in total, would add up to a big mistake.
So, reordering your life entirely around a scenario with probability << 1e-1000000 is probably bad, but with probability << 1e-1000000 it might turn out to be good, and if you accept unbounded utilities, that tiny chance might make it worth it.
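To make the contrast concrete, here is a minimal sketch in Python with stand-in numbers (nothing like 1e-1000000 or 3^^^^^3 is representable, so the figures below are hypothetical placeholders chosen only to show the structure of the comparison): under a linear unbounded utility, the vanishingly unlikely branch dominates the expectation, while under a bounded utility it can’t.

```python
from fractions import Fraction

# Toy stand-ins: the real numbers (p ~ 1e-1000000, disutility ~ 3^^^^^3) are far
# beyond anything representable, so these placeholders only illustrate the structure.
p = Fraction(1, 10**100)           # probability of the trickster-god scenario
u_coffee = Fraction(1)             # small benefit of just drinking the coffee
u_torture = Fraction(-(10**200))   # vastly larger disutility if the scenario is real

# Linear, unbounded utility: the tiny-probability term dominates the expectation,
# so the expected-utility rule says not to drink.
ev_drink = (1 - p) * u_coffee + p * u_torture
print(ev_drink < 0)   # True

# Bounded utility (squashed into (-1, 1)): the same tiny probability can no longer
# dominate, and drinking comes out ahead.
def bounded(u, scale=1000):
    return u / (abs(u) + scale)

ev_drink_bounded = (1 - p) * bounded(u_coffee) + p * bounded(u_torture)
print(ev_drink_bounded > 0)  # True
```

The case split is just the two branches of that expectation considered separately: in the likely branch drinking is mildly good, in the unlikely branch it’s catastrophic, and it’s only the unbounded-utility aggregation that lets the second branch swallow the first.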