(Just FYI, over the course of this discussion I have been gradually updating downward my confidence that you’re interested in being accurate and fair about total utilitarians, rather than merely slinging mud.)
So, for genuinely fixed resources, a total utilitarian would consider it a win to kill someone and replace them with someone else if that were a net utility gain. For this it doesn’t suffice for the replacement to be happier (even assuming for the moment that utility = happiness, which needn’t be quite right); you also have to consider their impact on others, and the impact on the whole society of all that killing-and-replacing.
In particular, if the result of such a policy were that everyone lived in constant fear of being killed and replaced with someone happier, or felt forced to pretend to be much happier than they really were, then a consistent total utilitarian would likely oppose the policy.
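To make that bookkeeping concrete, here is a toy sketch in Python (every number is invented purely for illustration): even if the replacement Y is individually happier than X, summing utility over everyone can turn the swap net-negative once the fear the policy induces in the rest of the population is counted.

```python
# Toy total-utilitarian bookkeeping, with entirely invented numbers.
# Total utility here is just the sum of individual utilities.

population = 1000        # everyone else in the society
u_x = 9.0                # utility of X, the person killed
u_y = 12.0               # utility of Y, the happier replacement
fear_cost = 0.01         # hypothetical per-person utility loss from living
                         # under a kill-and-replace policy

naive_gain = u_y - u_x                  # +3.0: looks like a clear win
societal_loss = population * fear_cost  # 10.0, spread across everyone else

net_change = naive_gain - societal_loss
print(net_change)  # -7.0: on this sum, a consistent total utilitarian says no
```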
Note also that although you say “killing X, to let them be replaced with Y”, all a total utilitarian would be required to approve of is killing X and actually replacing them with Y. The scenario I suppose you need to imagine here is that we have machines for manufacturing fully-grown people, and they’ve gradually been getting better so that they produce better and happier and nicer and more productive people.
“must be filled with a burning wish”
Er, no.
Also: it’s only “practical considerations” that would produce the kind of situation you describe, one of fixed total resources.
“(Just FYI, over the course of this discussion I have been gradually updating downward my confidence that you’re interested in being accurate and fair about total utilitarians, rather than merely slinging mud.)”
I admit I have been using deliberately emotive descriptions, as I believe that total utilitarians have gradually disconnected themselves from the true consequences of their beliefs. They are the equivalent of those who argue that “maybe the world isn’t worth saving” while never dreaming of letting people they know, or even random strangers, just die in front of them.
“you also have to consider their impact on others, and the impact on the whole society of all that killing-and-replacing.”
Of course! But a true total utilitarian would therefore want to mould society (if they could) so that killing-and-replacing has less negative impact.
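Rerunning the toy numbers from above makes that worry explicit: if the per-person fear cost could be engineered down far enough, the same sum flips back to endorsing the policy. (Again, every number here is invented.)

```python
# Same invented setup as before, but with the per-person fear cost
# driven down by (hypothetical) social moulding.
population, u_x, u_y = 1000, 9.0, 12.0
fear_cost = 0.001   # society engineered to fear replacement much less

net_change = (u_y - u_x) - population * fear_cost
print(net_change)   # +2.0: the same total-utility sum now calls it a win
```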
“The scenario I suppose you need to imagine here is that we have machines for manufacturing fully-grown people, and they’ve gradually been getting better so that they produce better and happier and nicer and more productive people.”
In a future where uploads and copying may be possible, this may not be as far-fetched as it seems (and total resources are likely to be limited). That’s the only reason I care about this: situations could be created in the medium-term future where the problematic aspects of total utilitarianism come to the fore. I’m not sure we can safely rely on practical considerations to keep these conclusions at bay.