Maybe I’m misreading this exchange, but there seems to be some confusion between individual utility functions and utilitarianism as an ethical system. An individual utility function as per von Neumann and Morgenstern is defined only up to a positive affine transformation: you can add any constant and multiply by any positive factor without changing the preferences it represents. Individual vN-M utility functions therefore cannot be compared, aggregated, or averaged across individuals, which is what any flavor of utilitarianism requires one way or another (and which invariably leads into nonsense, in my opinion).
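This non-comparability can be made concrete with a toy example (the names and numbers are entirely hypothetical): rescaling one person’s vN-M utility function leaves that person’s preferences unchanged, but changes which outcome an interpersonal sum ranks highest.

```python
# Toy illustration: vN-M utilities are invariant under positive affine
# transformations for each individual, but interpersonal sums are not.
# Two people, two social outcomes A and B.
alice = {"A": 0.0, "B": 1.0}   # Alice prefers B
bob   = {"A": 1.0, "B": 0.0}   # Bob prefers A

def rescale(u, a, b):
    """Apply a positive affine transformation a*u + b (with a > 0).
    The result represents the *same* vN-M preferences."""
    return {k: a * v + b for k, v in u.items()}

def total(u1, u2, outcome):
    # Naive interpersonal aggregation: just add the two utilities.
    return u1[outcome] + u2[outcome]

# With the original scales, the two outcomes tie:
assert total(alice, bob, "A") == total(alice, bob, "B")

# Rescale Bob's utility function (same preferences!) and the tie breaks:
bob2 = rescale(bob, 10.0, 0.0)
assert total(alice, bob2, "A") > total(alice, bob2, "B")
```

The aggregate ranking depends on an arbitrary choice of scale for each person, which is exactly why the raw functions cannot be meaningfully summed or averaged.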
It’s only preference utilitarianism that aggregates individual vN-M utility functions. Other kinds of utilitarianism can use other measures of quality of life, such as pleasure minus pain; these measures have their own difficulties, but they don’t have this particular difficulty.
You’re right, it’s not true that all sorts of utilitarianism require aggregating vN-M utility functions; that was an imprecise statement on my part. However, as far as I can tell, any sort of utilitarianism requires comparing, adding, or averaging some measure of utility across individuals, and I’m not aware of any such measure for which this is more meaningful than it is for vN-M utility functions. (If you know of any examples, I’d be curious to hear them.)
Individual vN-M utility functions therefore cannot be compared, aggregated,
or averaged across individuals, which is what any flavor of utilitarianism requires
one way or another (and which invariably leads into nonsense, in my opinion).
Estimates of individual utility functions can be averaged, if you do it right, so far as I can tell. A possible estimate of everybody’s utility is a computable function that, given a person ID and the person’s circumstances, returns a rational number in the interval [0, 1]. Discard the computable functions inconsistent with the observed behavior of people, then average over all remaining possibilities, weighting by the universal prior; this gives you an estimated utility for each person in the range [0, 1]. We’re estimating utilities for humans, not arbitrary hypothetical creatures, so there’s an approximate universal minimum utility (you and everyone you care about are tortured to death) and an approximate maximum utility (you get everything you want). And because we’re estimating everybody’s utility with one function, an estimate that says I don’t like to be tortured will be simpler than one that doesn’t, even if I have never been tortured, because other people have been observed trying to avoid torture.
Does that proposal make sense? (I’m concerned that I may have been too brief.)
Does anything obvious break if you average these across humans?
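In case a sketch helps, here is a deliberately toy rendering of the proposal. Everything in it is a stand-in: the hypothesis space is a tiny hand-written list rather than all computable functions, the universal prior is replaced by a crude simplicity weight of 2^-(description length), and “observed behavior” is a single revealed choice. It is meant only to show the shape of the filter-then-average step, not a workable implementation.

```python
from fractions import Fraction

# Each hypothesis maps (person_id, circumstance) -> utility in [0, 1].
# The names and circumstances here are invented for illustration.
hypotheses = [
    ("u=1/2 always",       lambda p, c: Fraction(1, 2)),
    ("tortured->0 else 1", lambda p, c: Fraction(0) if c == "tortured" else Fraction(1)),
    ("tortured->1 else 0", lambda p, c: Fraction(1) if c == "tortured" else Fraction(0)),
]

# Observed behavior: person p0 chose "comfortable" over "tortured".
observations = [("p0", "comfortable", "tortured")]

def consistent(h, obs):
    # Keep h only if it agrees with every observed choice.
    return all(h(p, chosen) > h(p, rejected) for p, chosen, rejected in obs)

def estimate(person, circumstance):
    # Average the surviving hypotheses, weighted by a simplicity prior
    # (2^-len(description) as a crude stand-in for the universal prior).
    survivors = [(Fraction(1, 2 ** len(name)), h)
                 for name, h in hypotheses if consistent(h, observations)]
    total_weight = sum(w for w, _ in survivors)
    return sum(w * h(person, circumstance) for w, h in survivors) / total_weight
```

Note the cross-person generalization described above: the estimate assigns low utility to torture for person p1, who was never observed, because the same function has to fit p0’s observed behavior.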
As far as I can see, your proposal is well-defined and consistent. However, even if we ignore all the intractable problems with translating it into any practical answers about concrete problems (of which I’m sure you’re aware), this is still only one possible way to aggregate and compare utilities interpersonally, with no clear reason why you would use it instead of some other one that would favor and disfavor different groups and individuals.
I agree with you that my proposed scheme is computationally intractable, and that it has other issues too. IMO the other issues can be fixed and I hope to get feedback on a completed version at some point. Assuming the fixes are good, we’d then have an unimplementable specification of a way to fairly balance the interests of different people, and a next step would be to look for some implementable approximation to it. That would be an improvement over not having a specification, right?
...this is still only one possible way to aggregate and compare utilities interpersonally, with no clear reason why you would use it instead of some other one that would favor and disfavor different groups and individuals.
The implied principle here seems to be that if we can’t find a unique way to balance the interests of different people, we shouldn’t do it at all. I believe there are multiple plausible schemes, so by that principle we will stay paralyzed for as long as we refuse to pick one and move on. There is precedent for making arbitrary choices: many cultural norms are arbitrary, for example.
I wish I actually had multiple plausible schemes to consider. I can think of some with obvious bugs, but it doesn’t seem worthwhile to list them here. I could also make a trivial change by proposing unfair weights (say, my utility gets a weight of 1.1 in the average and everyone else’s gets a weight of 1). If anybody can propose an interestingly different alternative, I’d love to hear it.
Also, if I incorrectly extracted the principle behind the parent post, I’d like to be corrected.
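The “unfair weights” variant mentioned above is easy to write down. A minimal sketch (all names hypothetical) of how a weight of 1.1 for oneself tilts otherwise symmetric outcomes:

```python
# Hypothetical "unfair weights" variant: my utility gets weight 1.1,
# everyone else's gets weight 1.0, in the weighted average.
def weighted_average(utilities, me):
    weights = {p: (1.1 if p == me else 1.0) for p in utilities}
    return sum(weights[p] * u for p, u in utilities.items()) / sum(weights.values())

# Two outcomes that are exact mirror images of each other:
outcome_favors_me  = {"me": 1.0, "you": 0.0}
outcome_favors_you = {"me": 0.0, "you": 1.0}

# The biased weights break the symmetry in my favor:
assert weighted_average(outcome_favors_me, "me") > weighted_average(outcome_favors_you, "me")
```

With equal weights the two outcomes would tie, so this really is the trivial change described: structurally the same scheme, differing only in an arbitrary weighting choice.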
Analysis paralysis is one path to defeat.