How about a more reasonable scenario then: for fixed resources, total utilitarians (and average ones, in fact) would be in favour of killing the least happy members of society to let them be replaced with happier ones, so far as this is possible (and if they designed a government, they would do their utmost to ensure this is possible). In fact, they’d want to replace them with happier people who don’t mind being killed or having their friends killed, as that makes it easier to iterate the process.
Also, total utilitarians (but not average ones) would be in favour of killing the least efficient members of society (in terms of transforming resources into happiness) to let them be replaced with more efficient ones.
Now, practical considerations may preclude being able to do this. But a genuine total utilitarian must be filled with a burning wish, if only it were possible, to kill off so many people and replace them in this ideal way. If only there were a way...
(Just FYI, over the course of this discussion I have been gradually updating downward my confidence that you’re interested in being accurate and fair about total utilitarians, rather than merely slinging mud.)
So, for genuinely fixed resources, a total utilitarian would consider it a win to kill someone and replace them with someone else if that were a net utility gain. For this it doesn’t suffice for the someone-else to be happier (even assuming for the moment that utility = happiness, which needn’t be quite right); you also have to consider their impact on others, and the impact on the whole society of all that killing-and-replacing.
In particular, e.g., if the result of such a policy were that everyone was living in constant fear that they would be killed and replaced with someone happier, or forced to pretend to be much happier than they really were, then a consistent total utilitarian would likely oppose the policy.
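(A minimal way to write down that comparison, purely as a sketch and assuming for the moment that utility = happiness: let $u_X$ and $u_Y$ be the lifetime utilities of the person killed and their replacement, and let $\Delta E$ be the change in everyone else's utility from the killing-and-replacing itself, the fear, the pretence and so on. Then the total utilitarian's criterion for the swap being a win is

$$\Delta U \;=\; (u_Y - u_X) + \Delta E \;>\; 0,$$

and a $\Delta E$ that is even mildly negative but spread across the whole society can easily swamp $u_Y - u_X$.)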
Note also that although you say “killing X, to let them be replaced with Y”, all a total utilitarian would actually be required to approve of is killing X and actually replacing them with Y. The scenario I suppose you need to imagine here is that we have machines for manufacturing fully-grown people, and they’ve gradually been getting better so that they produce better and happier and nicer and more productive people.
must be filled with a burning wish
Er, no.
Also: it’s only “practical considerations” that would produce the kind of situation you describe, one of fixed total resources.
(Just FYI, over the course of this discussion I have been gradually updating downward my confidence that you’re interested in being accurate and fair about total utilitarians, rather than merely slinging mud.)
I admit I have been using deliberately emotive descriptions, as I believe that total utilitarians have gradually disconnected themselves from the true consequences of their beliefs—the equivalent of those who argue that “maybe the world isn’t worth saving” while never dreaming of letting people they know or even random strangers just die in front of them.
you also have to consider their impact on others, and the impact on the whole society of all that killing-and-replacing.
Of course! But a true total utilitarian would therefore want to mould society (if they could) so that killing-and-replacing has less negative impact.
The scenario I suppose you need to imagine here is that we have machines for manufacturing fully-grown people, and they’ve gradually been getting better so that they produce better and happier and nicer and more productive people.
In a future where uploads and copying may be possible, this may not be so far-fetched as it seems (and total resources are likely limited). That’s the only reason I care about this: there could be situations created in the medium-term future where the problematic aspects of total utilitarianism come to the fore. I’m not sure we can over-rely on practical considerations to keep these conclusions at bay.