A utility function does not compel total (or average) utilitarianism
Does anyone actually think this? Thinking that utility functions are the right way to talk about rationality does not imply utilitarianism, or indeed any moral theory, as far as I can tell. I don’t think I’ve seen anyone on LW actually arguing that implication, although I think most would affirm the antecedent.
There is a seemingly sound argument for the repugnant conclusion, which goes some way towards making total utilitarianism plausible. It goes like this… If all these steps increase the quality of the outcome (and it seems intuitively that they do), then the end state must be better than the starting state, agreeing with total utilitarianism.
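For concreteness, here is the endpoint comparison with made-up numbers (a toy illustration only; nothing here comes from the original argument): total utilitarianism ranks worlds by summed utility, so a vast population of lives barely worth living can outrank a small, very happy one.

```latex
% Total utilitarianism ranks worlds by U = n x u-bar (population size
% times average utility). Purely illustrative numbers:
\[
  U_A = 10^{6} \times 100 = 10^{8},
  \qquad
  U_Z = 10^{10} \times 0.1 = 10^{9},
  \qquad
  \text{so } U_Z > U_A .
\]
% Each intermediate step adds people and then equalises, never
% (intuitively) making things worse -- which is how the argument
% gets from A to Z.
```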
This is the complete opposite of what I’d understood the point of that argument to be: the usual claim is that the final state is clearly not of high utility, and so there is something wrong with total utilitarianism. That is fine for what you’re arguing, but you seem to have taken it a bit the wrong way around.
As for the mathematical rigour, there are some very nice impossibility theorems proved by Arrhenius (example) that make the kind of worries exemplified by the repugnant conclusion a lot more precise. They don’t even require the problematic assumptions about utility functions that you point out: they’re just about axiology (ranking possible outcomes). So they’re actually independent problems for utilitarians.
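To make “just about axiology” concrete, here is a minimal framing of what such theorems quantify over (my gloss on the setup, not Arrhenius’s exact statement):

```latex
% An axiology is just a betterness ordering on the set O of possible
% outcomes: a transitive "at least as good as" relation, with no
% numerical utilities anywhere.
\[
  {\succeq} \subseteq \mathcal{O} \times \mathcal{O},
  \qquad
  A \succeq B \ \text{and} \ B \succeq C \ \Rightarrow\ A \succeq C .
\]
% The impossibility theorems show that no such ordering can satisfy
% a handful of individually plausible adequacy conditions at once.
```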
I think a lot of the reason that utilitarians don’t tend to feel terribly worried about the difficulty of interpersonal utility calculations (IUCs) is that we already do them. Every time you decide to let someone else have the last cookie because they’ll enjoy it more, you just did a little IUC. Obviously, it’s pretty unclear how to scale that up, but it gives a strong feeling that it ought to be possible, somehow.
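A toy sketch of what that cookie-scale IUC looks like (the names and numbers are hypothetical, and the contested step is exactly the assumption baked into the input: that different people’s enjoyment lives on one common numeric scale):

```python
# Toy interpersonal utility comparison (IUC), as in the cookie example.
# Hypothetical names and numbers; the philosophically loaded assumption
# is that the enjoyment values are comparable across people at all.

def allocate_last_cookie(enjoyment: dict[str, float]) -> str:
    """Give the last cookie to whoever would enjoy it most."""
    return max(enjoyment, key=enjoyment.get)

# You'd enjoy it a bit; your friend would enjoy it a lot.
print(allocate_last_cookie({"you": 0.3, "friend": 0.9}))  # -> "friend"

# "Scaling up" is then (naively) just summing such numbers over everyone
# affected by each option and picking the biggest sum -- the unclear part
# is where the numbers come from, not the arithmetic.
```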