Combining the two doesn’t solve the ‘biggest problems of utilitarianism’:
1) We know from Arrhenius’s impossibility theorems that you cannot get an axiology which avoids the repugnant conclusion without incurring other large costs (e.g. violations of transitivity, or of the independence of irrelevant alternatives). Although you don’t spell out ‘balance utilitarianism’ in enough detail to tell which conditions it violates, we know that it, like any other population axiology, will have very large drawbacks.
2) ‘Balance utilitarianism’ seems a long way from the frontier of ethical theories in terms of its persuasiveness as a population ethic.
a) The write-up claims that only actions that increase both sum and median wellbeing are good, those that increase one or the other are sub-optimal, and those that decrease both are bad. Yet what if we face choices where no option increases both sum and median welfare (as in Parfit’s ‘mere addition’), and we have to trade one off against the other? How do we balance one against the other? The devil is in these details, and a theory’s being silent on these cases shouldn’t be counted in its favour.
b) Yet even as it stands we can construct nasty counter-examples to the rule, based on very benign versions of mere addition. Suppose Alice is in her own universe at 10 welfare (benchmark this as a very happy life). She can press button A or button B. Button A boosts her up to 11 welfare. Button B boosts her to 10^100 welfare, and brings into existence 10^100 people at (10-10^-100) welfare (say, a life as happy as Alice’s but with a pinprick). Balance utilitarianism deems pressing button A good (it increases both total and median welfare), but pressing button B merely suboptimal (it vastly increases the total but nudges the median just below 10). Yet pressing button B is much better for Alice, and also instantiates vast numbers of happy people.
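To make the arithmetic concrete, here is a toy sketch of the button example (small numbers stand in for the 10^100s, and the `verdict` function is my own hypothetical reading of the stated rule, not anything from the write-up):

```python
from statistics import median

# Toy stand-ins: 1e6 for 10^100, 100 people for 10^100 people.
baseline = [10.0]                        # Alice alone at welfare 10
button_a = [11.0]                        # Alice boosted to 11
button_b = [1e6] + [10.0 - 1e-6] * 100  # huge boost for Alice, plus many near-10 lives

def verdict(before, after):
    """Classify per the stated rule: good if sum and median both rise,
    suboptimal if exactly one rises, bad otherwise."""
    d_sum = sum(after) - sum(before)
    d_med = median(after) - median(before)
    if d_sum > 0 and d_med > 0:
        return "good"
    if d_sum > 0 or d_med > 0:
        return "suboptimal"
    return "bad"

print(verdict(baseline, button_a))  # "good": sum and median both rise to 11
print(verdict(baseline, button_b))  # "suboptimal": sum explodes, median dips just below 10
```

The median after button B is 10 − 10^−100 (here, 10 − 10^−6), fractionally below the starting median of 10, which is all it takes for the rule to downgrade the vastly better outcome.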
c) The ‘median criterion’ will be generally costly, as it is insensitive to changes in cardinal welfare levels outside the median person/pair so long as the ordering is unchanged (and, conversely, acutely sensitive to small changes that reorder people around the median).
d) Median views (like average ones) also incur costs due to their violation of separability. It seems intuitive that the choiceworthiness of our actions shouldn’t depend on whether there is an alien population on Alpha Centauri happier or sadder than we are (e.g. if there are lots of them and they’re happier, any act that brings more humans into existence comes out ‘suboptimal’ by the lights of balance utilitarianism).
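A toy version of the Alpha Centauri case (all numbers hypothetical): adding one more happy human raises total welfare, yet drags the pooled median down past the human/alien boundary, so the act gets downgraded for reasons that have nothing to do with anyone it affects.

```python
from statistics import median

humans = [10.0] * 10
aliens = [50.0] * 11          # numerous, happier Alpha Centaurians

before = humans + aliens
after  = before + [10.0]      # bring one more happy human into existence

print(sum(after) - sum(before))        # +10.0: total welfare rises
print(median(before), median(after))   # 50.0 -> 30.0: median falls, so 'suboptimal'
```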