However, either the axioms themselves (e.g. the continuity/Archimedean axiom, or general versions of Independence or the Sure-Thing Principle) rule out expectational total utilitarianism, or the kinds of arguments used to defend those axioms do (Russell and Isaacs, 2021).
I don’t understand this part of your argument. Can you explain how you imagine this proof working?
Otherwise, it seems like most of your arguments come down to showing that lots of paradoxes happen when you do math to infinite ethics.
There are many arguments on LessWrong for, and against, infinite ethics. I don’t think any of them, including this one, actually shows “utilitarianism is irrational or self-undermining”. For example, as you came close to saying in your responses, you could just have bounded utility functions! That ends up being rational, and seems not self-undermining, because after looking at many of these arguments it seems like maybe you’re kinda forced to.
I think there’s also some work on using hyper-reals or other generalizations to quantify infinities, and solving various problems that way.
Overall, I wish you’d explain the arguments in the papers you linked better. The one argument you actually wrote in this post was interesting, you should have done more of that!
Thanks for the comment!

I don’t understand this part of your argument. Can you explain how you imagine this proof working?
St Petersburg-like prospects (finite actual utility for each possible outcome but infinite expected utility), or generalizations of them, violate extensions of each of these axioms to countably many possible outcomes:
The continuity/Archimedean axiom: if $A$ and $B$ have finite expected utility, and $A < B$, there’s no strict mixture of $A$ and a St Petersburg prospect with infinite expected utility, like $pA + (1-p)\,\mathrm{StPetersburg}$ with $0 < p < 1$, that’s equivalent to $B$, because all such strict mixtures will have infinite expected utility (see the sketch after this list). Now, you might not have defined expected utility yet, but this kind of argument would generalize: you can pick $A$ and $B$ to be outcomes of the St Petersburg prospect, and any strict mixture with $A$ will be better than $B$.
The Independence axiom: see the following footnote.[2]
The Sure-Thing Principle: in the money pump argument in my post, B-$100 is strictly better than each outcome of A, but A is strictly better than B-$100. EDIT: Actually, you can just compare A with B.
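To make the continuity violation concrete, here is a minimal worked version using the classic St Petersburg payoffs ($2^n$ with probability $2^{-n}$); the specific payoffs are illustrative, not taken from Russell and Isaacs, 2021:

```latex
% Classic St Petersburg prospect: outcome 2^n with probability 2^{-n}, n = 1, 2, ...
\mathbb{E}[\mathrm{StPetersburg}]
  = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n}
  = \sum_{n=1}^{\infty} 1
  = \infty.
% For any A, B with finite expected utility, A < B, and any 0 < p < 1:
\mathbb{E}\bigl[\,pA + (1-p)\,\mathrm{StPetersburg}\,\bigr]
  = p\,\mathbb{E}[A] + (1-p)\cdot\infty
  = \infty,
% so every strict mixture has infinite expected utility, none can be
% equivalent to B, and the countable extension of continuity fails.
```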
I think these axioms are usually stated only for prospects with finitely many possible outcomes, but the arguments for the finitary versions, like specific money pump arguments, would apply equally (possibly with tiny modifications that wouldn’t undermine them) to the countable versions. Or, at least, that’s the claim of Russell and Isaacs, 2021, who illustrate it with a few arguments and briefly describe some others that would generalize. I reproduced their money pump argument in the post.
For example, as you came close to saying in your responses, you could just have bounded utility functions! That ends up being rational, and seems not self-undermining because after looking at many of these arguments it seems like maybe you’re kinda forced to.
Ya, I agree that would be rational. I don’t think having a bounded utility function is in itself self-undermining (and I don’t say so), but it would undermine utilitarianism, because it wouldn’t satisfy Impartiality + (Separability or Goodsell, 2021’s version of Anteriority). If you have to give up Impartiality + (Separability or Goodsell, 2021’s version of Anteriority) and the arguments that support them, then there doesn’t seem to be much reason left to be a utilitarian of any kind in the first place. You’ll have to give up the formal proofs of utilitarianism that depend on these principles or restrictions of them that are motivated in the same ways.
You can try to make utilitarianism rational by approximating it with a bounded utility function, or applying a bounded function to total welfare and taking that as your utility function, and then maximizing expected utility, but then you undermine the main arguments for utilitarianism in the first place.
Hence, utilitarianism is irrational or self-undermining.
Overall, I wish you’d explain the arguments in the papers you linked better. The one argument you actually wrote in this post was interesting, you should have done more of that!
I did consider doing that, but the post is already pretty long and I didn’t want to spend much more time on it. Goodsell, 2021’s proof is simple enough, so you could check out the paper. The proof for Theorem 4 from Russell, 2023 looks trickier. I didn’t get it on my first read, and I haven’t spent the time to actually understand it. EDIT: Also, the proofs aren’t as nice/intuitive/fun and don’t flow as naturally as the money pump argument. They present a sequence of prospects constructed in very specific ways, and give a contradiction (a violation of transitivity) when you apply all of the assumptions in the theorem. You just have to check the logic.
[2] Russell and Isaacs, 2021 define Countable Independence as follows:

For any prospects $X_1, X_2, \ldots$ and $Y_1, Y_2, \ldots$, and any probabilities $p_1, p_2, \ldots$ that sum to one, if $X_1 \lesssim Y_1, X_2 \lesssim Y_2, \ldots$, then

$$\sum_i p_i X_i \lesssim \sum_i p_i Y_i.$$

If furthermore $X_j < Y_j$ for some $j$ such that $p_j > 0$, then

$$\sum_i p_i X_i < \sum_i p_i Y_i.$$
Then they write:
Improper prospects clash directly with Countable Independence. Suppose $X$ is a prospect that assigns probabilities $p_1, p_2, \ldots$ to outcomes $x_1, x_2, \ldots$. We can think of $X$ as a countable mixture in two different ways. First, it is a mixture of the one-outcome prospects $x_1, x_2, \ldots$ in the obvious way. Second, it is also a mixture of infinitely many copies of $X$ itself. If $X$ is improper, this means that $X$ is strictly better than each outcome $x_i$. But then Countable Independence would require that $X$ is strictly better than $X$. (The argument proceeds the same way if $X$ is strictly worse than each outcome $x_i$ instead.)
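As a concrete instance of the clash (my example, not the paper’s): take $X$ to be the St Petersburg prospect, with $p_i = 2^{-i}$ and outcomes $x_i = 2^i$, and suppose prospects with higher expected utility are preferred. Then:

```latex
% X is improper: its infinite expected utility makes it strictly better
% than each of its own (finite) outcomes:
X \succ x_i \quad \text{for all } i.
% Applying Countable Independence term by term to the two mixture
% decompositions of X (its outcomes vs. copies of itself) gives
X = \sum_{i} 2^{-i} x_i \;\prec\; \sum_{i} 2^{-i} X = X,
% i.e. X \prec X, contradicting irreflexivity of strict preference.
```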
Based on your explanation in this comment, it seems to me that St. Petersburg-like prospects don’t actually invalidate utilitarian ethics as it would have been understood by e.g. Bentham, but they do contradict the existence of a real-valued utility function. It can still be true that welfare is the only thing that matters, and that the value of welfare aggregates linearly. It’s not clear how to choose when a decision has multiple options with infinite expected utility (or an option that has infinite positive EV plus infinite negative EV), but I don’t think these theorems imply that there cannot be any decision criterion that’s consistent with the principles of utilitarianism. (At the same time, I don’t know what the decision criterion would actually be.) Perhaps you could have a version of Bentham-esque utilitarianism that uses a real-valued utility function for finite values, and uses some other decision procedure for infinite values.
Ya, I don’t think utilitarian ethics is invalidated, it’s just that we don’t really have much reason to be utilitarian specifically anymore (not that there are necessarily much more compelling reasons for other views). Why sum welfare and not combine them some other way? I guess there’s still direct intuition: two of a good thing is twice as good as just one of them. But I don’t see how we could defend that or utilitarianism in general any further in a way that isn’t question-begging and doesn’t depend on arguments that undermine utilitarianism when generalized.
You could just take your utility function to be $\sigma\left(\sum_{i=1}^{N} u_i\right)$, where $\sigma$ is any bounded increasing function, say $\arctan$, and maximize the expected value of that. This doesn’t work with actual infinities, but it can handle arbitrary prospects over finite populations. Or, you could just rank prospects by stochastic dominance with respect to the sum of utilities, like Tarsney, 2020.
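A minimal sketch of the first option, assuming $\arctan$ as the bounding function and a truncated St Petersburg-like prospect (the names and payoffs are mine, purely illustrative):

```python
import math

def bounded_utility(total_welfare: float) -> float:
    """Apply a bounded increasing function (here arctan) to total welfare."""
    return math.atan(total_welfare)

def expected_bounded_utility(prospect) -> float:
    """Expected bounded utility of a prospect given as (probability, total_welfare) pairs."""
    return sum(p * bounded_utility(w) for p, w in prospect)

# St Petersburg-like prospect: total welfare 2**n with probability 2**-n,
# truncated at 60 terms (the omitted tail has probability < 1e-18).
st_petersburg = [(2.0 ** -n, 2.0 ** n) for n in range(1, 61)]

# A sure thing: total welfare 10 with certainty.
sure_thing = [(1.0, 10.0)]

# Expected *welfare* of the St Petersburg prospect diverges (each term is 1),
# but its expected *bounded* utility converges to a finite value below pi/2,
# so the two prospects remain comparable:
print(expected_bounded_utility(st_petersburg))  # ~1.26
print(expected_bounded_utility(sure_thing))     # ~1.47
```

Note that the bounded version ranks the sure thing above the St Petersburg prospect, which is exactly the kind of verdict that departs from expectational total utilitarianism.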
You can’t extend it the naive way, though, i.e. just maximize $\mathbb{E}[\sum_i U_i]$ whenever that’s finite and then do something else when it’s infinite or undefined. One of the following would happen: the money pump argument goes through again, you give up stochastic dominance, or you give up transitivity, each of which seems irrational. This was my 4th response to Infinities are generally too problematic.
Also, I’d say what I’m considering here isn’t really “infinite ethics”, or at least not what I understand infinite ethics to be, which is concerned with actual infinities, e.g. an infinite universe, infinitely long lives or infinite value. None of the arguments here assume such infinities, only infinitely many possible outcomes with finite (but unbounded) value.
The argument you made that I understood seemed to rest on allowing an infinite expectation to occur, which seems to me pretty closely related to infinite ethics, though I’m no ethicist.
The argument can be generalized without using infinite expectations, and instead using violations of Limitedness in Russell and Isaacs, 2021 or reckless preferences in Beckstead and Thomas, 2023. However, intuitively, it involves prospects that look like they should be infinitely valuable or undefinably valuable relative to the things they’re made up of. Any violation of (the countable extension of) the Archimedean Property/continuity is going to look like you have some kind of infinity.
The issue could just be a categorization thing. I don’t think philosophers would normally include this in “infinite ethics”, because it involves no actual infinities out there in the world.