What would be insane would be to say that because you gave birth, you’re exempt from criticism for the killing
Well, if you gave birth to someone happier than the person you killed, then you’re not as good as the non-killing-birthers, but you’re certainly better than non-killing-non-birthers, and certainly deserve to be complimented for being better than them… Or alternately, the non-killing-non-birthers should be told to look up to you. Or serial killers reluctant to reproduce should be offered a free kill in exchange for a few babies.
I think utilitarians should generally stay out of the business of making moral assessments of people as opposed to actions. The action of giving birth to a happier person is (for total utilitarians) a good action. The action of killing the first person is (for total utilitarians) a bad action. If these two actions are (as they would be in just about any actually-credible scenario) totally unrelated, then what a total utilitarian might do is to praise one of the actions and condemn the other; or tell non-killing-non-birthers to emulate one of those actions but not the other.
The last suggestion is an interesting one, in that it does actually describe a nasty-sounding policy that total utilitarians really might endorse. But if we’re going to appeal to intuition here we’d better make sure that we’re not painting an unrealistic picture (which is the sort of thing that enables the Chinese Room argument to fool some people).
For the nasty-sounding policy actually to be approved by a total utilitarian in a given case, we need to find someone who very much wants to kill people but can successfully be prevented from doing so; who could, if s/he so chose, produce children who would bring something like as much net happiness to the world as the killings remove; who currently chooses not to produce such children but would be willing to do so in exchange for being allowed to kill; and there would need to be no other people capable of producing such children at a substantially lower cost to society. Just about every part of this is (I think) very implausible.
It may be that there are weird possible worlds in which those things happen, in which case indeed a total utilitarian might endorse the policy. But “it is possible to imagine really weird possible worlds in which this ethical system leads to conclusions that we, living in the quite different actual world, find strange” is not a very strong criticism of an ethical system. I think in fact such criticisms can be applied to just about any ethical system.
I think utilitarians should generally stay out of the business of making moral assessments of people as opposed to actions.
I think the best way to do this is to “naturalize” all the events involved. Instead of having someone kill or create someone else, imagine the events happened purely because of natural forces.
As it happens, in the case of killing and replacing a person, my intuitions remain the same. If someone is struck by lightning, and a new person pops out of a rock to replace them, my sense is that, on the net, a bad thing has happened, even if the new person has a somewhat better life than the first person. It would have been better if the first person hadn’t been struck by lightning, even if the only way to stop that from happening would also stop the rock from creating the new person.
Unless the new person’s life is a lot better, I think most total utilitarians would and should agree with you. Much of the utility associated with a person’s life happens in other people’s lives. If you get struck by lightning, others might lose a spouse, a parent, a child, a friend, a colleague, a teacher, etc. Some things that have been started might never be finished. For this death plus replacement to be a good thing just on account of your replacement’s better life, the replacement’s life would need to be sufficiently better than yours to outweigh all those things. I would in general expect that to be hard.
Obviously the further we get away from familiar experiences the less reliable our intuitions are. But I think my intuition remains the same, even if the person in question is a hermit in some wilderness somewhere.
How about a more reasonable scenario then: for fixed resources, total utilitarians (and average ones, in fact) would be in favour of killing the least happy members of society to let them be replaced with happier ones, so far as this is possible (and if they designed a government, they would do their utmost to ensure this is possible). In fact, they’d want to replace them with happier people who don’t mind being killed or having their friends killed, as that makes it easier to iterate the process.
Also, total utilitarians (but not average ones) would be in favour of killing the least efficient members of society (in terms of transforming resources into happiness) to let them be replaced with more efficient ones.
Now, practical considerations may preclude being able to do this. But a genuine total utilitarian must be filled with a burning wish, if only it were possible, to kill off so many people and replace them in this ideal way. If only there were a way...
(Just FYI, over the course of this discussion I have been gradually updating downward my confidence that you’re interested in being accurate and fair about total utilitarians, rather than merely slinging mud.)
So, for genuinely fixed resources, a total utilitarian would consider it a win to kill someone and replace them with someone else if that were a net utility gain. For this it doesn’t suffice for the someone-else to be happier (even assuming for the moment that utility = happiness, which needn’t be quite right); you also have to consider their impact on others, and the impact on the whole society of all that killing-and-replacing.
In particular, e.g., if the result of such a policy were that everyone was living in constant fear that they would be killed and replaced with someone happier, or forced to pretend to be much happier than they really were, then a consistent total utilitarian would likely oppose the policy.
Note also that although you say “killing X, to let them be replaced with Y”, all a total utilitarian would actually be required to approve of is killing X and actually replacing them with Y. The scenario I suppose you need to imagine here is that we have machines for manufacturing fully-grown people, and they’ve gradually been getting better so that they produce better and happier and nicer and more productive people.
must be filled with a burning wish
Er, no.
Also: it’s only “practical considerations” that would produce the kind of situation you describe, one of fixed total resources.
(Just FYI, over the course of this discussion I have been gradually updating downward my confidence that you’re interested in being accurate and fair about total utilitarians, rather than merely slinging mud.)
I admit I have been using deliberately emotive descriptions, as I believe that total utilitarians have gradually disconnected themselves from the true consequences of their beliefs—the equivalent of those who argue that “maybe the world isn’t worth saving” while never dreaming of letting people they know or even random strangers just die in front of them.
you also have to consider their impact on others, and the impact on the whole society of all that killing-and-replacing.
Of course! But a true total utilitarian would therefore want to mould society (if they could) so that killing-and-replacing have less negative impact.
The scenario I suppose you need to imagine here is that we have machines for manufacturing fully-grown people, and they’ve gradually been getting better so that they produce better and happier and nicer and more productive people.
In a future where uploads and copying may be possible, this may not be so far-fetched as it seems (and total resources are likely limited). That’s the only reason I care about this—there could be situations created in the medium-term future where the problematic aspects of total utilitarianism come to the fore. I’m not sure we can over-rely on practical considerations to keep these conclusions at bay.