So S is not utilitarian, right? (At least in your example.) Your point, then, is that it's possible to have an agent that accepts the repugnant conclusion but still agrees with our intuitions in more realistic cases? Well, sure, but that's not really a defense of total utilitarianism unless you can actually make it work in the case where S is total utilitarianism.
S is utilitarian in the sense of maximising a utility function. It is not a total utilitarian or an average utilitarian, however.
I find something like average times log of total to be far more intuitive than either average or total. Is this kind of utility function discussed much in the literature?
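For concreteness, here's a rough sketch of what I have in mind, taking "total" to mean total utility (population times average utility); the worlds and numbers below are just made up for illustration:

```python
import math

# Illustrative comparison of two worlds under three population axiologies.
# World A: a modest population living very good lives.
# World B: a vast population living lives barely worth living
# (the classic "repugnant conclusion" setup).

def total_util(n, avg):
    """Total utilitarianism: total utility = population * average utility."""
    return n * avg

def average_util(n, avg):
    """Average utilitarianism: per-person average utility."""
    return avg

def avg_times_log_total(n, avg):
    """The proposed intermediate: average utility times log of total utility."""
    return avg * math.log(n * avg)

# Hypothetical worlds: (population, average utility per person).
worlds = {
    "A (small, happy)": (1_000, 100.0),
    "B (vast, barely worth living)": (10**9, 1.0),
}

for name, (n, avg) in worlds.items():
    print(f"{name}: total={total_util(n, avg):.3g}, "
          f"average={average_util(n, avg):.3g}, "
          f"avg*log(total)={avg_times_log_total(n, avg):.3g}")
```

On this pair, total utilitarianism prefers the vast barely-worth-living world (10^9 vs 10^5), while both average and the avg-times-log-of-total function prefer the smaller, happier one (about 1151 vs 21 for the latter). But unlike pure average, the intermediate still counts it as an improvement to add people at the existing average utility, since the log(total) factor grows, so it sits genuinely between the two.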
As far as I know, I made it up. But there may be similar ones (and the idea of intermediates between average and total is discussed in the literature).
Ah, thanks!