Nobody seems willing to bite the bullet that in fact, if everything possible actually happens, and all parts of the universe are given equal weight, then no choice matters. What is wrong is the intuition that there is a moral truth, not any specific part of it.
To some extent, it boils down to “how do you justify any discount rate if the future is infinite and you weight all parts of it equally?” I think the answer is “you don’t; the value of any individual choice becomes infinitesimal, approaching 0.”
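To spell out what “infinitesimal value” means here (my sketch, assuming the undiscounted value stream doesn’t shrink to zero): give every time $t$ equal weight and suppose each period contributes at least some fixed $\epsilon > 0$. Then the total diverges,

$$\sum_{t=0}^{\infty} u_t = \infty,$$

while any single choice alters only finitely many terms, by some total $\Delta$, so its share of the whole vanishes:

$$\lim_{T\to\infty} \frac{\Delta}{\sum_{t=0}^{T} u_t} = 0.$$

Under equal weighting, no individual choice registers against the total.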
Note that even in a finite universe, I think this is one of the two huge problems with utilitarianism: how do you know the proper discount rate and timeframe for your choices? (Not today’s topic, but to avoid being a tease, the other is: how do you actually assign the values you’re aggregating?)
It may well be correct that there are no objective moral truths, but note that none of this stuff is specifically about objective moral truths. You can say the same sort of things about any value system, even if it’s just “what I personally happen to value”. We do, after all, have decisions to make, even if the universe turns out to be infinite.
If it’s only “what I happen to value”, then the questions of discount rate go away: distance from the agent is a fair basis for caring less about some things than others. And with enough indifference to the far future, it doesn’t matter whether it’s infinite or just very large.
This bypasses the problem, but what you’re left with isn’t utilitarianism. You’re no longer trying to maximize anything over the entire universe, only over your own local perceptions.
The questions go away if you personally are happy having a sufficiently rapidly increasing discount for things far away in space or time. But someone may not be happy with that; they may claim that they care equally about everyone (perhaps only in some “far mode” sense of caring) and want to know what they should then do.
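For concreteness, here is one “sufficiently rapid” discount, a textbook example rather than anything from the thread: weight things at temporal (or spatial) distance $t$ by $\gamma^t$ with $0 < \gamma < 1$, and suppose per-period value is bounded, $|u_t| \le M$. Then

$$\sum_{t=0}^{\infty} \gamma^t |u_t| \;\le\; \frac{M}{1-\gamma} < \infty,$$

so the aggregate is a finite number, and a change of $\Delta$ at distance $t$ moves it by exactly $\gamma^t \Delta$. The infinite future stops being a mathematical problem, at the price of caring about it geometrically less.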
The answer might turn out to be: “No, actually, it turns out you literally can’t care equally about everyone and still have any way of making decisions that actually works”. That would be interesting. (I gravely doubt that that is the answer, but it might be.)
I argued for this answer in the discussions about Pascal’s Mugging, and people kept responding, “Maybe we don’t actually have an unbounded utility function, but we want to modify ourselves to have one.”
I don’t want to modify myself in that way, and I don’t think that anyone else does in a coherent way (i.e. I do not believe that they would accept the consequences of their view if they knew them). So if someone can prove that it is not logically consistent in the first place, that would actually be an advantage, from my point of view, since it would prevent people from aiming for it.
It feels to me as if the following things are likely to be true:
If you want your utilities to be real-valued then you can’t value everyone equally in a universe with a countable infinity of people (for reasons analogous to the way you can’t pick one person uniformly at random from a universe with a countable infinity of people; spelled out below).
If you allow a more general notion of utilities, you can value everyone equally, but there may be a price to pay (e.g., some pairs of outcomes not being comparable, or not having enough structure for notions like “expected utility” to be defined).
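The obstruction behind the first claim is worth spelling out (a standard argument, not something in the original comment). A uniform pick from countably many people would give each person the same probability $p$, but

$$\sum_{n=1}^{\infty} p = \begin{cases} 0, & p = 0,\\ \infty, & p > 0,\end{cases}$$

so the probabilities can never total 1. The same arithmetic blocks equal real-valued weights: weight 0 for everyone values nobody, and any equal weight $w > 0$ on countably many people makes the totals diverge.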
As an instance of the second claim, consider the following construction. We have a countable infinity of possible people (not all of whom necessarily exist). We assume we’ve got a way of assigning utilities to individuals. Now say that a “global utility” means an assignment of a utility to each person (0 for nonexistent people), and put an equivalence relation on global utilities where u~v if you can get from one to the other by changing a finite number of the utilities, by amounts that add up to zero. (Or, maybe better: by changing any number of them, provided the changes are absolutely convergent, i.e. the sum of their absolute values is finite, and they sum to zero.)
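As a minimal sketch of how the finite case behaves (my illustration; the dict representation, person names, and function names are hypothetical, not part of the construction): when two global utilities differ in only finitely many places, which is all a finite data structure can hold, the comparison induced by the equivalence relation comes down to the sign of the summed difference, i.e. ordinary total utilitarianism.

```python
def diff(u, v):
    """Pointwise difference of two global utilities, each given as a dict
    mapping person -> utility; people absent from a dict have utility 0."""
    people = set(u) | set(v)
    d = {p: u.get(p, 0.0) - v.get(p, 0.0) for p in people}
    return {p: x for p, x in d.items() if x != 0.0}

def compare(u, v):
    """Compare two global utilities that differ in only finitely many places.
    Under u ~ v iff they differ by finitely many changes summing to zero,
    the induced comparison is decided by the sign of the total difference,
    i.e. total utilitarianism."""
    s = sum(diff(u, v).values())
    if s > 0:
        return ">"
    if s < 0:
        return "<"
    return "~"  # the changes cancel: u and v are equivalent

# Hypothetical example with a finite population:
u = {"alice": 3.0, "bob": 1.0}
v = {"alice": 1.0, "bob": 2.0}
print(compare(u, v))  # ">"  (total 4 vs total 3)
```

The genuinely infinite disagreements, where outcomes differ on infinitely many people by amounts whose absolute values don’t sum to anything finite, are exactly what this finite representation can’t express; that is where the incomparability described next comes from.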
In this case, you can compute expected utilities “pointwise”, which is nice; swapping two people’s “labels” (or, more generally, permuting finitely many labels) makes no difference to a “global utility”, which is nice; in any world with only finitely many people it’s equivalent to total utilitarianism, which is probably nice; if you increase some utilities and don’t decrease any, you get something strictly better, which is nice; but utilities aren’t always comparable, so in some cases this value system doesn’t know what to do. E.g., if you have disjoint infinite sets A and B of people, { everyone in A gets +1, everyone in B gets −1 } and {everyone in A gets −1, everyone in B gets +1 } are incomparable, which isn’t so nice.
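To unpack why that last pair is incomparable (my reasoning, following the construction above): the pointwise difference between the two outcomes is

$$d_p = \begin{cases} +2, & p \in A,\\ -2, & p \in B,\end{cases} \qquad \sum_p |d_p| = \infty,$$

so no absolutely convergent (let alone finite) set of changes relates one outcome to the other, and since $d$ takes both signs on infinite sets, neither outcome can be brought to pointwise dominate the other by such changes. The construction simply returns no verdict.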
Paper in question: Infinite Ethics. Also LW Wiki Page and a not-particularly-great Reddit thread.
Infinite timeframe, no intrinsic discount rate; any discounting is due only to uncertainty.
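One way to cash that out (my sketch, assuming the world survives each period independently with probability $p < 1$ and is otherwise valued with no intrinsic discount): the chance that period $t$ happens at all is $p^t$, so the expected undiscounted total is

$$\sum_{t=0}^{\infty} \Pr(\text{period } t \text{ occurs})\, u_t \;=\; \sum_{t=0}^{\infty} p^t u_t,$$

which converges for bounded $u_t$ and has exactly the form of exponential discounting at rate $1 - p$. The “discount rate” is then not an extra moral premise, just bookkeeping for uncertainty about whether the future arrives.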