Excellent question! I think that my actual preferences are some combination of selfish and altruistic (and the same is probably true of most people), and DNT only tries to capture the altruistic part. It is therefore interesting to try writing down a model of how selfish utility aggregates with altruistic utility. A simple “agnostic” formula such as a linear combination with fixed coefficients works poorly, because for any given coefficients it’s easy to come up with a hypothetical where it’s either way too selfish or way too altruistic.
I think that it’s more reasonable to model this aggregation as bargaining between two imaginary agents: a selfish agent that only values you and people close to you, and an altruistic agent with impartial (DNT-ish) preferences. This bargaining can work, for example, according to the Kalai-Smorodinsky solution, with the disagreement point being “purely selfish optimization with probability p and purely altruistic optimization with probability 1−p”, where p is a parameter reflecting your personal level of altruism. Of course, the result of the bargaining can be expressed as a single “effective” utility function, which is just a linear combination of the two, but the coefficients depend on the prior and the strategy space.
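Here is a toy sketch of this bargaining in code. The three options, their utilities for each agent, and the value of p are all made up purely for illustration, and I take each agent's unconstrained maximum as the ideal point for simplicity:

```python
# Toy Kalai-Smorodinsky bargaining between a "selfish" and an "altruistic" agent.
# Options, utilities and p are illustrative; the disagreement point is the
# lottery described above (selfish optimum with prob. p, altruistic with 1-p).
import numpy as np
from scipy.optimize import linprog

u_selfish = np.array([10.0, 7.0, 2.0])    # utility of each pure option to the selfish agent
u_altruistic = np.array([1.0, 7.0, 9.0])  # utility of each pure option to the altruistic agent
p = 0.3                                   # parameter reflecting the personal level of altruism

# Disagreement point: the lottery over the two agents' favourite options.
i_s, i_a = np.argmax(u_selfish), np.argmax(u_altruistic)
d = np.array([p * u_selfish[i_s] + (1 - p) * u_selfish[i_a],
              p * u_altruistic[i_s] + (1 - p) * u_altruistic[i_a]])

# "Ideal" point, here simplified to each agent's unconstrained maximum.
ideal = np.array([u_selfish.max(), u_altruistic.max()])

# Kalai-Smorodinsky: move from d toward the ideal point as far as the feasible
# set of mixed strategies allows. Variables: mixing weights x_1..x_n and t.
n = len(u_selfish)
c = np.zeros(n + 1)
c[-1] = -1.0  # linprog minimizes, so this maximizes t
A_ub = np.array([np.append(-u_selfish, ideal[0] - d[0]),
                 np.append(-u_altruistic, ideal[1] - d[1])])
b_ub = -d
A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)  # mixing weights sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * n + [(0, 1)], method="highs")

x = res.x[:n]
print("bargained mixed strategy:", np.round(x, 3))
print("effective utilities (selfish, altruistic):", x @ u_selfish, x @ u_altruistic)
```

In this example the solution mixes the “compromise” and “altruistic” options; the slope of the Pareto frontier at that point is what fixes the coefficients of the effective linear combination, which is why those coefficients depend on the strategy space.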
It’s interesting to speculate about the relation between this model and multiagent models of the mind.

Something of the same nature should apply when a group of people act cooperatively. In this case we can imagine bargaining between an agent that only cares about this group and an impartial agent. Even if the group includes all living people, the two agents will be different, since the second assigns value to animals and future people as well.
Of course time discounting can make things look different, but I see no moral justification to discount based on time.
Actually, I think time discount is justified and necessary. Without time discount, you get a divergent integral over time and utility is undefined. A separate question is what kind of time discount exactly. One possibility I find alluring is using the minimax-regret decision rule for exponential time discount, with a half-life that is allowed to vary from something of the order of τ0 to ∞.
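To spell out the divergence point in my own notation: if the instantaneous utility u(t) is bounded and stays above some ε > 0, the undiscounted total is infinite, while any exponential discount with half-life τ gives a finite value. The minimax-regret idea is then, schematically (glossing over normalization and the τ → ∞ limit), to minimize the worst-case regret over that family of half-lives:

```latex
% u(t): instantaneous utility; \tau: discount half-life (notation is mine).
\[
  \int_0^\infty u(t)\,dt = \infty
  \quad\text{whenever } u(t) \ge \epsilon > 0 \text{ for all } t,
\]
\[
  U_\tau = \int_0^\infty 2^{-t/\tau}\, u(t)\,dt
  \;\le\; \frac{\tau}{\ln 2}\,\sup_t u(t) \;<\; \infty .
\]
% Schematic minimax-regret rule: U_\tau(\pi) is the \tau-discounted utility
% achieved by policy \pi; choose \pi to minimize
\[
  \sup_{\tau \ge \tau_0}\Bigl(\,\sup_{\pi'} U_\tau(\pi') - U_\tau(\pi)\Bigr).
\]
```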
That bargaining approach is indeed interesting, thanks.
On discounting, I need to read more. I’m currently looking through Pareto Principles in Infinite Ethics (other useful suggestions welcome). While I can see that a naive approach gives divergent integrals and undefined utility, it’s not yet clear to me that there is no approach that avoids this without discounting.
If time discounting truly is necessary, then of course no moral justification is required. But to the extent that this is an open question (which, in my mind, it currently is, perhaps because I lack understanding), I don’t see any purely moral justification for time discounting. From an altruistic view behind a veil of ignorance, it seems to arbitrarily favour some moral patients over others.
That lack of a moral justification motivates me to double-check that it really is necessary on purely logical/mathematical grounds.