Thanks for writing this!
I only skimmed the post, so I may have missed something, but it seems to me that this post underemphasizes the fact that both Harsanyi’s Lottery and LELO imply utilitarianism under plausible assumptions about rationality. For example, if the social planner satisfies the vNM axioms of expected utility theory, then Harsanyi’s Lottery implies that the social planner is utilitarian with respect to expected utilities (Harsanyi 1953). Likewise, if the social planner’s intertemporal preferences satisfy a set of normatively plausible axioms, then LELO implies that the social planner is utilitarian with respect to experienced utilities (Fryxell 2024). In my view, it is therefore not clear that it makes sense to compare LELO and Harsanyi’s Lottery with utilitarianism.
Also, at least some of the advantages of aggregativism that you mention are easily incorporated into utilitarianism. For example, what is achieved by adopting LELO with exponential time-discounting in Section 2.5.1 can also be achieved by adopting discounted utilitarianism (rather than unweighted total utilitarianism).
A final tiny comment: LELO has a long history, going back to at least C.I. Lewis’s “An Analysis of Knowledge and Valuation”, though the term “LELO” was coined by my colleague Loren Fryxell (Fryxell 2024). It’s probably worth adding citations to these.
thanks for the comments, gustav
the rationality conditions are a pretty decent model of human behaviour, but they’re only approximations. you’re right that if the approximation is perfect then aggregativism is mathematically equivalent to utilitarianism, which does render some of these advantages/objections moot. but I don’t know how close the approximations are (that’s an empirical question).
i kinda see aggregativism vs utilitarianism as a bundle of claims of the following form:
humans aren’t perfectly consequentialist, and aggregativism answers the question “how consequentialist should our moral theory be?” with “exactly as consequentialist as self-interested humans are.”
humans have an inaction bias, and aggregativism answers the question “how inaction-biased should our moral theory be?” with “exactly as inaction-biased as self-interested humans are.”
humans are time-discounting, and aggregativism answers the question “how time-discounting should our moral theory be?” with “exactly as time-discounting as self-interested humans are.”
humans are risk-averse, and aggregativism answers the question “how risk-averse should our moral theory be?” with “exactly as risk-averse as self-interested humans are.”
and so on
the purpose of the social zeta function ζ:S→P is simply to map social outcomes (the object of our moral attitudes) to personal outcomes (the object of the self-interested human’s attitudes) so that this bundle of claims type-checks.
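here’s a minimal sketch of what i mean by “type-checks” (illustrative code with hypothetical names, not from the post): the planner’s preference over social outcomes is just the self-interested human’s preference over personal outcomes, pulled back along ζ.

```python
# illustrative sketch only; hypothetical names, not from the post.
# the planner's attitude to a social outcome is the self-interested human's
# attitude to the personal outcome that zeta maps it to.
from typing import Callable, TypeVar

S = TypeVar("S")  # social outcomes (the object of our moral attitudes)
P = TypeVar("P")  # personal outcomes (the object of self-interested attitudes)

def aggregative_preference(
    zeta: Callable[[S], P],                 # social zeta function, zeta : S -> P
    human_prefers: Callable[[P, P], bool],  # self-interested (possibly irrational) preference
) -> Callable[[S, S], bool]:
    """Pull the human's personal preference back along zeta to get a social preference."""
    def planner_prefers(x: S, y: S) -> bool:
        return human_prefers(zeta(x), zeta(y))
    return planner_prefers
```

whatever quirks the human preference has (inaction bias, time-discounting, risk-aversion, and so on), the planner inherits them unchanged; that’s the bundle of claims above in one line.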
yeah that’s true, two quick thoughts:
i suspect exponential time-discounting was added to total utilitarianism because it’s a good model of self-interested human behaviour. aggregativism says “let’s do this with everything”, i.e. we modify utilitarianism in all the ways that we think self-interested humans behave.
suppose self-interested humans do time-discounting; then LELO would approximate total utilitarianism with discounting in population time, not calendar time. that is, a future generation is discounted by the sum of lifetimes of each preceding generation. (if the calendar time for an event is T, then the population time for the event is ∫_{−∞}^{T} N(t) dt, where N(t) is the population size at time t. I first heard this concept in this Greaves talk.) if you’re gonna adopt discounted utilitarianism, then population-time-discounted utilitarianism makes much more sense to me than calendar-time-discounted utilitarianism, and the fact that LELO gives the right answer here is a case in favour of it.
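to make the contrast concrete (my notation, not in the post): write τ(T) for the population time of calendar date T and δ for the discount rate; then the two schemes weight an event at calendar date T by

$$\tau(T) = \int_{-\infty}^{T} N(t)\,dt, \qquad w_{\text{calendar}}(T) = e^{-\delta T}, \qquad w_{\text{pop}}(T) = e^{-\delta\,\tau(T)}.$$

under w_pop, a stretch of history with hardly any people adds almost no discounting, while a population boom heavily discounts whatever comes after it, which is what you’d expect from discounting along the concatenated lives in LELO rather than along the clock.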
I mention Loren’s paper in the footnote of Part 1. i’ll cite him in parts 2 and 3 also, thanks for the reminder.
I appreciate the reply!
I’m not sure why we should combine Harsanyi’s Lottery (or LELO or whatever) with a model of actual human behaviour. Here’s a rough sketch of how I am thinking about it: Morality is about what preference ordering we should have. If we should have preference ordering R, then R is rational (morality presumably does not require irrationality). If R is rational, then R satisfies the vNM axioms. Hence, I think it is sufficient that the vNM axioms work as principles of rationality; they don’t need to describe actual human behaviour in this context.
Regarding your two quick thoughts on time-discounting: yes, I basically agree. However, I also want to note that it is a bit unclear how to ground discounting in LELO, because doing so requires that one specifies the order in which lives are concatenated, and I am not sure there is a non-arbitrary way of doing so.
Thanks for engaging!
I think human behaviour is straight-up irrational, but I want to specify principles of social choice nonetheless. i.e. the motivation is to resolve carlsmith’s On the limits of idealized values.
now, if human behaviour is irrational (e.g. intransitive, incomplete, nonconsequentialist, imprudent, biased, etc), then my social planner (following LELO, or other aggregative principles) will be similarly irrational. this is pretty rough for aggregativism; I list it as the most severe objection in section 3.1.
but to the extent that human behaviour is irrational, the utilitarian principles (total, average, Rawls’ maximin) have a pretty rough time also, because they appeal to a personal utility function v:P→R to add/average/minimise. idk where they get that if humans are irrational.
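(spelling that out with my own notation, not the post’s: if social outcome s gives personal outcomes p_1, …, p_n to the n people involved, the three rules rank s by

$$\text{total: } \sum_{i=1}^{n} v(p_i), \qquad \text{average: } \frac{1}{n}\sum_{i=1}^{n} v(p_i), \qquad \text{Rawls: } \min_{i} v(p_i),$$

and every one of them needs the personal utility function v:P→R as an input.)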
maybe you, the utilitarian, can say: “well, first we apply some idealisation procedure to human behaviour, to remove the irrationalities, and then extract a personal utility function, and then maximise the sum/average/minimum of the personal utility function”
but, if provided with a reasonable idealisation procedure, the aggregativist can play the same move: “well, first we apply the idealisation procedure to human behaviour, to remove the irrationalities, and then run LELO/HL/ROI using that idealised model of human behaviour.” i discuss this move in 3.2, but i’m wary about it. like, how alien is this idealised human? why does it have any moral authority? what if it’s just ‘gone off the rails’ so to speak?
on the order in which lives are concatenated: macaskill orders the population by birth date. this seems non-arbitrary-ish(?);[1] it gives the right result wrt our permutation-dependent values; and anything else is subject to egyptologist objections, where to determine whether we should choose future A over B, we need to first check the population density of ancient egypt.
Loren sidesteps the order-dependence of LELO with (imo) an unrealistically strong rationality condition.
[1] if you’re worried about relativistic effects, then use the reference frame of the social planner.
Thanks!
I don’t have great answers to these metaethical questions. Conditional on normative realism, it seems plausible to me that first-order normative views must satisfy the vNM axioms. Conditional on normative antirealism, I agree it is less clear that first-order normative views must satisfy the vNM axioms, but this is just a special case of it being hard to justify any normative views under normative antirealism.
In any case, I suspect that we are close to reaching bedrock in this discussion, so perhaps this is a good place to end it.
Harsanyi’s theorem has also been generalized in various ways without the rationality axioms; see McCarthy et al. (2020), https://doi.org/10.1016/j.jmateco.2020.01.001. But it still assumes something similar to, but weaker than, the independence axiom, which in my view is hard to motivate separately.