Very clear thinking, thanks for writing that! I think the desire view is the only one that matters, and the right way to aggregate desires of many people is by negotiation (real or simulated). CEV is a proposal along these lines, though it’s a huge research problem and nowhere near solved. Anyway, since we don’t have a superpowered AI or a million years to negotiate everything, we should probably pick a subset of desires that don’t need much negotiating (e.g. everyone wants to be healthy but few people want others to be sick) and point effective charity at that. Not sure the hedonic view should ever be used—you’re right that it has unfixable problems.
I feel like “negotiation” is very handwavey. Can you explain what that looks like in a simple zero-sum situation? For example, suppose that you can either save the lives of the family of 5 that I described above, or else save 20 loners who have no strong relationships; assume every individual has an equally strong desire to remain alive. How do we actually aggregate all their desires, without the problem of double counting?
The reason I think hedonic views are important is that desires can be arbitrarily weird. I don’t want to endorse as moral a parent who raises their child with only one overwhelmingly strong desire: that the sky remains blue. Is that child’s well-being therefore much higher than anyone else’s, since everyone else has had some of their desires thwarted? More generally, I don’t think a “desire” is a particularly well-defined concept, and I wouldn’t want it to be my main moral foundation.
If everyone in the family has X-strong desire that the family should live, and every loner has Y-strong desire to live, I’d save the family iff 5X>20Y. Does that make sense?
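In code, with made-up numbers (X and Y stand in for whatever a real measurement of desire strength would produce; the values below are purely illustrative):

```python
# Toy version of the rule "save the family iff 5X > 20Y":
# sum each group's desire strengths and compare the totals.
X = 1.0   # each family member's desire that the whole family survives
Y = 0.3   # each loner's desire to survive (illustrative value only)

family_total = 5 * X    # 5 members, all desiring the same outcome
loners_total = 20 * Y   # 20 loners, each desiring only their own survival

print("save the family" if family_total > loners_total else "save the loners")
```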
It makes sense, but I find it very counterintuitive, partly because it’s not obvious to me whether the concept of “measuring desire” makes sense. Here are two ways that I might measure whether people have a stronger desire for A or B:
1) I hook up a brainwave reader to each person, and see how strong/emotional/determined their feelings are about outcome A vs outcome B.
2) I ask each person whether they would swap outcome A for outcome B.
In the first case, it’s plausible to me that each person’s emotions are basically maxed out at the thought of either their own death or their family’s death (since we know people are very bad at having emotions that scale appropriately with numbers). So then X = Y, and you save the 20 people.
In the second case, assume that each person involved desires their own continued survival at about the same strength S. But then you ask each member of the family whether they’d swap their own survival for someone else in their family surviving, and they’d say yes. So each member of the family has total desire > 5S that their family survives, whereas each loner has desire S to survive themselves, and so you save the family.
Which one is closer to your view of measuring desire? Option 2 seems more intuitive to me, because it matches the decisions we’d actually make, but then I find the conclusion that it’s more moral to save the family very strange.
The first case is closer to my view, but it’s not about emotions getting maxed out. It’s more about “voting power” being equalized between people, so you can’t get a billion times more voting power by caring about a billion people. You only get a fixed amount of votes to spread between outcomes. That’s how I imagine negotiation to work, though it’s still very handwavey.
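A minimal sketch of what I have in mind (the budget of 1.0 per person and all the outcome labels below are just illustrative):

```python
# "Equalized voting power": each person gets a fixed budget of 1.0 vote
# to spread over the outcomes they care about, so caring about more
# people never multiplies your total influence.

def normalize(raw):
    """Scale one person's raw desire weights so they sum to 1."""
    total = sum(raw.values())
    return {outcome: w / total for outcome, w in raw.items()}

def tally(realized_outcomes, voters):
    """Total votes an action earns: each voter's weight on every
    outcome that the action would realize."""
    return sum(v.get(o, 0.0) for v in voters for o in realized_outcomes)

# Each family member spreads their budget over all 5 family survivals;
# each loner puts their whole budget on their own survival.
family = [normalize({f"family_{i}": 1.0 for i in range(5)}) for _ in range(5)]
loners = [normalize({f"loner_{j}": 1.0}) for j in range(20)]
everyone = family + loners

save_family = tally({f"family_{i}" for i in range(5)}, everyone)
save_loners = tally({f"loner_{j}" for j in range(20)}, everyone)
print(save_family, save_loners)  # 5.0 vs 20.0 (up to float rounding)
```

Note that this reproduces your first case: the family’s concern for each other spreads their fixed budget thinner rather than multiplying it, so the 20 loners win.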
Okay, but haven’t you now basically defined “increasing utility” out of existence? If voting power is roughly normalised, then it’s roughly equally important to save the life of an immensely happy, satisfied teenager with a bright future and the life of a nearly-suicidal retiree who’s going to die soon anyway, as long as staying alive is the strongest relevant desire for both. In fact, it’s even worse: if the teenager has a strong unreciprocated crush, I can construct situations where only half of their voting power goes towards saving themselves, so their life is effectively half as valuable as a loner’s.
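Concretely, in the same toy model as above (the outcome names are hypothetical):

```python
# Same fixed-budget normalization as before (weights are hypothetical).
def normalize(raw):
    total = sum(raw.values())
    return {outcome: w / total for outcome, w in raw.items()}

# The teenager cares about the crush's flourishing as strongly as
# their own survival; the loner cares only about their own survival.
teenager = normalize({"self_survives": 1.0, "crush_flourishes": 1.0})
loner = normalize({"self_survives": 1.0})

print(teenager["self_survives"], loner["self_survives"])  # 0.5 vs 1.0
```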
I don’t think that’s a big problem as long as there are enough people like you, whose altruistic desires slightly favor the teenager.