If we should have preference ordering R, then R is rational (morality presumably does not require irrationality).
I think human behaviour is straight-up irrational, but I want to specify principles of social choice nonetheless; i.e. the motivation is to resolve Carlsmith's On the limits of idealized values.
now, if human behaviour is irrational (e.g. intransitive, incomplete, nonconsequentialist, imprudent, biased, etc.), then my social planner (following LELO, or other aggregative principles) will be similarly irrational: e.g. if the prudential ranking of concatenated lives is cyclic, the LELO planner's social ranking is cyclic too. this is pretty rough for aggregativism; I list it as the most severe objection, in section 3.1.
but to the extent that human behaviour is irrational, the utilitarian principles (total, average, Rawls' maximin) have a pretty rough time also, because they appeal to a personal utility function v : P → R to add/average/minimise. idk where they'd get that function if humans are irrational.
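for concreteness, here's the shape of what those principles need (notation mine, extending the v : P → R above): writing v_x : P → R for the utility assignment in outcome x, the three rules rank outcomes by

$$ W_{\text{total}}(x) = \sum_{i \in P} v_x(i), \qquad W_{\text{avg}}(x) = \frac{1}{|P|} \sum_{i \in P} v_x(i), \qquad W_{\text{maximin}}(x) = \min_{i \in P} v_x(i). $$

all three are undefined if there's no v_x to plug in, i.e. if behaviour can't be rationalised by any utility function.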
maybe the utilitarian can say: "well, first we apply some idealisation procedure to human behaviour, to remove the irrationalities; then we extract a personal utility function; and then we maximise the sum/average/minimum of that personal utility function."
but, if provided with a reasonable idealisation procedure, the aggregativist can play the same move: "well, first we apply the idealisation procedure to human behaviour, to remove the irrationalities, and then run LELO/HL/ROI using that idealised model of human behaviour." i discuss this move in section 3.2, but i'm wary about it. like, how alien is this idealised human? why does it have any moral authority? what if it's just 'gone off the rails', so to speak?
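schematically, the symmetry looks like this (a toy sketch; every function below is a hypothetical stub I made up, not anyone's actual proposal):

```python
# toy sketch: both pipelines share the same contested first step, `idealise`.
# every function here is a hypothetical stub standing in for the real thing.

def idealise(behaviour):
    """remove intransitivity, incompleteness, bias, etc. (the contested step)."""
    return behaviour  # stub

def extract_utility(ideal):
    """read off a personal utility function v : P -> R from idealised behaviour."""
    return lambda person: 0.0  # stub

def run_lelo(ideal, options):
    """rank options by the idealised prudential ranking of concatenated lives."""
    return options[0]  # stub

def utilitarian_choice(behaviour, options, population):
    v = extract_utility(idealise(behaviour))
    return max(options, key=lambda option: sum(v(p) for p in population))

def aggregativist_choice(behaviour, options):
    return run_lelo(idealise(behaviour), options)
```

whatever worries apply to `idealise` in the second pipeline apply equally in the first.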
it is a bit unclear how to ground discounting in LELO, because doing so requires that one specify the order in which lives are concatenated, and I am not sure there is a non-arbitrary way of doing so.
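to see the problem concretely, here's a sketch with pure time discounting (notation mine): suppose lives l_1, …, l_n have durations d_1, …, d_n and prudential values u(l_1), …, u(l_n), and are concatenated in the order given by a permutation σ. then the discounted value of the mega-life is something like

$$ V(\sigma) = \sum_{i=1}^{n} \delta^{\,T_\sigma(i)}\, u(l_i), \qquad T_\sigma(i) = \sum_{j \,:\, \sigma(j) < \sigma(i)} d_j, $$

where δ ∈ (0,1] is the discount factor and T_σ(i) is how far into the mega-life life l_i starts. for δ < 1, V(σ) generically depends on σ, so LELO-with-discounting needs the order pinned down.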
MacAskill orders the population by birth date. this seems non-arbitrary-ish(?);[1] it gives the right result wrt our permutation-dependent values; and anything else is subject to egyptologist objections, where to determine whether we should choose future A over future B, we first need to check the population density of ancient Egypt.
Loren sidesteps the order-dependence of LELO with (imo) an unrealistically strong rationality condition.
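for illustration (this is my gloss, not necessarily Loren's exact condition): one way to force order-independence is to require the prudential ranking of mega-lives to be additively separable with no pure time preference,

$$ U(l_{\sigma(1)} \oplus \cdots \oplus l_{\sigma(n)}) = \sum_{i=1}^{n} u(l_i) \quad \text{for every permutation } \sigma, $$

which makes the concatenation order irrelevant by construction. but that's a very strong constraint on prudential preference: no discounting, no preferences about the narrative shape of a life.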
i’m wary about it. like, how alien is this idealised human? why does it have any moral authority?
I don’t have great answers to these metaethical questions. Conditional on normative realism, it seems plausible to me that first-order normative views must satisfy the vNM axioms. Conditional on normative antirealism, I agree it is less clear that first-order normative views must satisfy the vNM axioms, but this is just a special case of it being hard to justify any normative views under normative antirealism.
In any case, I suspect that we are close to reaching bedrock here, so perhaps this is a good place to end the discussion.
if you’re worried about relativistic effects then use the reference frame of the social planner
I don’t have great answers to these metaethical questions. Conditional on normative realism, it seems plausible to me that first-order normative views must satisfy the vNM axioms. Conditional on normative antirealism, I agree it is less clear that first-order normative views must satisfy the vNM axioms, but this is just a special case of it being hard to justify any normative views under normative antirealism.
In any case, I suspect that we are close to reaching bedrock in this discussion, so perhaps this is a good place to end the discussion.