None of the above criteria except (in some empirical cases) H implies that human infants or late-stage demented people should be given more ethical consideration than cows, pigs or chickens.
This strikes me as a very impatient assessment. The human infant will turn into a human, and the piglet will turn into a pig, and so down the road A through E will suggest treating them differently.
Similarly, the demented can be given the reverse treatment (though it works differently); they once deserved moral standing, and thus are extended moral standing because the extender can expect that when their time comes, they will be treated by society in about the same way as society treated its elders when they were young. (This mostly falls under B, except the reciprocation is not direct.)
(Looking at the comments, Manfred makes a similar argument more vividly over here.)
If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well. And the argument from potentiality would also prohibit abortion or experimentation on embryos. I was thinking about including the argument from potentiality, but then I didn’t because the post is already long and because I didn’t want to make it look like I was just “knocking down a very weak argument or two”. I should have used a qualifier though in the sentence you quoted, to leave room for things I hadn’t considered.
If we use cognitive enhancements on animals, we can turn them into highly intelligent, self-aware beings as well.
And then arguments A through E will not argue for treating the enhanced animals differently from humans.
And the argument from potentiality would also prohibit abortion or experimentation on embryos.
It would make the difference between abortion and infanticide small. It does seem to me that the arguments for allowing abortion but not allowing infanticide are weak and the most convincing one hinges on legal convenience.
I was thinking about including the argument from potentiality, but then I didn’t because the post is already long and because I didn’t want to make it look like I was just “knocking down a very weak argument or two”.
I think this is a hazard for any “Arguments against X” post; the reason X is controversial is generally because there are many arguments on both sides, and an argument that seems strong to one person seems weak to another.
What level of “potential” is required here? A human baby has a certain amount of potential to reach whatever threshold you’re comparing it against—if it’s fed, kept warm, not killed, etc. A pig also has a certain level of potential—if we tweak its genetics.
If we develop AI, then any given pile of sand has just as much potential to reach “human level” as an infant. I would be amused if improved engineering knowledge gave beaches moral weight (though not completely opposed to the idea).
Your proposed category—“can develop to contain morally relevant quantity X”—tends to fail along similar edge cases as whatever morally relevant quality it’s replacing.
What level of “potential” is required here? A human baby has a certain amount of potential to reach whatever threshold you’re comparing it against—if it’s fed, kept warm, not killed, etc. A pig also has a certain level of potential—if we tweak its genetics.
I have given a gradualist answer to every question related to this topic, and unsurprisingly I will not veer from that here. The value of the potential is inversely related to the difficulty involved in realizing that potential, as the value of oil in the ground depends on what lies between you and it.
Vanvier, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves? Young human children with genetic disorders are given love, care and respect—even if the nature of their illness means they will never live to see their third birthday. We don’t hold their lack of “potential” against them. Likewise, pigs are never going to acquire generative syntax or do calculus. But their lack of cognitive sophistication doesn’t make them any less sentient.
Vanvier, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves?
My intuitions say the former. I would not be averse to a quick end for young human children who are not going to live to see their third birthday.
But their lack of cognitive sophistication doesn’t make them any less sentient.
Agreed, mostly. (I think it might be meaningful to refer to syntax or math as ‘senses’ in the context of subjective experience and I suspect that abstract reasoning and subjective sensation of all emotions, including pain, are negatively correlated. The first weakly points towards valuing their experience less, but the second strongly points towards valuing their experience more.)
Vanvier, you say that you wouldn’t be averse to a quick end for young human children who are not going to live to see their third birthday. What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass a typical human toddler or mature pig?
What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass a typical human toddler or mature pig?
I’m not sure what this would look like, actually. The first thing that comes to mind is Down’s Syndrome, but the impression I get is that that’s a much smaller reduction in cognitive capacity than the one you’re describing. The last time I considered that issue, I favored abortion in the presence of a positive amniocentesis test for Down’s, and I suspect that the more extreme the reduction, the easier it would be to come to that conclusion.
I hope you don’t mind that this answers a different question than the one you asked; I think there are significant (practical, if not also moral) differences between gamete selection, embryo selection, abortion, infanticide, and execution of adults (sorted from easiest to justify to most difficult to justify). I don’t think execution of cognitively impaired adults would be justifiable in the presence of modern American economic constraints on grounds other than danger posed to others.
Historically, we have dismissed very obviously sapient people as lacking moral worth (people with various mental illnesses and disabilities, and even the freaking Deaf). Since babies are going to have whatever-makes-them-people at some point, it may be more likely that they already have it and we don’t notice, rather than they haven’t yet. That’s why I’m a lot iffier about killing babies and mentally disabled humans than pigs.
Speaking as a vegetarian for ethical reasons … yes. That’s not to say they don’t deserve some moral consideration based on raw brainpower/sentience and even a degree of sentimentality, of course, but still.
My sperm has the potential to become human. When I realized almost all of them were dying because of my continued existence, I decided that I will have to kill myself. It was the only rational thing to do.
It seems to me there is a significant difference between requiring an oocyte to become a person and requiring sustenance to become a person. I think about half of zygotes survive the pregnancy process, but almost all sperm don’t turn into people.
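The asymmetry here is really one of probabilities. A back-of-the-envelope sketch (the ~50% zygote figure is the one stated in the comment above; the per-sperm odds are my own illustrative assumption, not a measured number):

```python
# Rough expected-persons-per-cell comparison.
p_zygote = 0.5    # ~half of zygotes survive pregnancy (figure from the comment)
p_sperm = 1e-9    # ASSUMPTION: illustrative order of magnitude for one sperm

# If moral weight tracks the probability of becoming a person,
# the two cases differ by many orders of magnitude, not merely by degree.
ratio = p_zygote / p_sperm
```

On these (assumed) numbers the gap is nearly nine orders of magnitude, which is why a probability-weighted view treats zygotes and sperm very differently even though both are "potential people."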
Would this difference disappear if we developed the technology to turn millions of sperm cells into babies?
Probably, but in such a world, I don’t think human life would be scarce, and I think that the value of human life would plummet accordingly. They would still represent a significant time and capital investment, and so be more valuable than the em case, but I think that people would be seen as much more replaceable.
It is possible that human reproduction is horrible by many moral standards which seem reasonable. I think it’s more convenient to jettison those moral standards than reshape reproduction, but one could imagine a world where people were castrated / had oophorectomies to prevent gamete production, with reproduction done digitally from sequenced genomes. It does not seem obviously worse than our world, except that it seems like a lot of work for minimal benefit.
Is it possible to create some rule like this? Yeah, sure.
The problem is that you have to explain why that rule is valid.
If two babies are being tortured and one will die tomorrow but the other grows into an adult, your rule would claim that we should only stop one torture, and it’s not clear why since their phenomenal pain is identical.
The problem is that you have to explain why that rule is valid.
It comes from valuing future world trajectories, rather than just valuing the present. I see a small difference between killing a fetus before delivery and an infant after delivery, and the difference I see is roughly proportional to the amount of time between the two (and the probability that the fetus will survive to become the infant).
These sorts of gradual rules seem to me far more defensible than sharp gradations, because the sharpness in the rule rarely corresponds to a sharpness in reality.
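One way to make the "roughly proportional" claim concrete is a toy weighting function. This is entirely illustrative; the exponential form, the halving time, and the probabilities are my assumptions, not a worked-out ethics:

```python
def moral_weight(p_survival: float, years_to_birth: float,
                 halving_time: float = 1.0) -> float:
    """Toy gradualist rule: start from the infant's full weight (1.0),
    discount by survival probability and exponentially by time remaining."""
    return p_survival * 0.5 ** (years_to_birth / halving_time)

# A fetus just before delivery is weighted almost like the infant;
# an early embryo much less so -- a gradient, not a bright line.
just_before = moral_weight(p_survival=0.98, years_to_birth=0.01)
early = moral_weight(p_survival=0.5, years_to_birth=0.7)
```

The point of the sketch is only that the weight varies smoothly with time and survival probability, so there is no single moment at which the rule flips from "no standing" to "full standing."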
What about a similar gradual rule for varying sentience levels of animal?
A quantitative measure of sentience seems much more reasonable than a binary measure. I’m not a biologist, though, and so don’t have a good sense of how sharp the gradations of sentience in animals are; I would naively expect basically every level of sentience from ‘doesn’t have a central nervous system’ to ‘beyond humans’ to be possible, but don’t know if there are bands that aren’t occupied for various practical reasons.
While sliding scales may more accurately represent reality, sharp gradations are the only way we can come up with a consistent policy. Abortion especially is a case where we need a bright line. The fact that we have two different words (abortion and infanticide) for what amounts to a difference of a couple of hours is very significant. We don’t want to let absolutely everyone use their own discretion in difficult situations.
Most policy arguments are about where to draw the bright line, not about whether we should adopt a sliding scale instead, and I think that’s actually a good idea. Admitting that most moral questions fall under a gray area is more likely to give your opponent ammunition to twist your moral views than it is to make your own judgment more accurate.
Some people value the future-potential of things and even give them moral value in cases when the present-time precursor or cause clearly has no moral status of its own. This corresponds to many people’s moral intuitions, and so they don’t need to explain why this is valid.
This corresponds to many people’s moral intuitions, and so they don’t need to explain why this is valid.
If you believe the sole justification for a moral proposition is that you think it’s intuitively correct, then no one is ever wrong, and these types of articles are rather pointless, no?
I’m a moral anti-realist. I don’t think there’s a “true objective” ethics out there written into the fabric of the Universe for us to discover.
That doesn’t mean there is no such thing as morals, or that debating them is pointless. Morals are part of what we are and we perceive them as moral intuitions. Because we (humans) are very similar to one another, our moral intuitions are also fairly similar, and so it makes sense to discuss morals, because we can influence one another, change our minds, better understand each other, and come to agreement or trade values.
Nobody is ever “right” or “wrong” about morals. You can only be right or wrong about questions of fact, and the only factual, empirical thing about morals is what moral intuitions some particular person has at a point in time.
Would this difference disappear if we developed the technology to turn millions of sperm cells into babies?
Doesn’t our current cloning technology allow us to turn any ordinary cell into a baby, albeit one with aging-related diseases?
A quantitative measure of sentience seems much more reasonable than a binary measure. I’m not a biologist, though, and so don’t have a good sense of how sharp the gradations of sentience in animals are; I would naively expect basically every level of sentience from ‘doesn’t have a central nervous system’ to ‘beyond humans’ to be possible, but don’t know if there are bands that aren’t occupied for various practical reasons.
I don’t think anyone is advocating a binary system. No one is supporting voting rights for pigs, for example.
If two babies are being tortured and one will die tomorrow but the other grows into an adult, your rule would claim that we should only stop one torture

If we can only stop one, sure. If we could stop both, why not do so?
Nobody is ever “right” or “wrong” about morals.

If Alice bets $10,000 against $1 on heads and Bob bets $10,000 against $1 on tails, they’re both idiots, even though only one of them will lose.
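Spelling out the arithmetic behind the analogy (assuming a fair coin): each bettor stakes $10,000 to win $1, so both bets have the same large negative expected value before the flip, and both bettors are making a mistake regardless of who happens to win.

```python
p = 0.5  # fair coin assumed
# Each bettor stakes $10,000 to win $1.
ev_alice = p * 1 - (1 - p) * 10_000      # bets on heads
ev_bob = (1 - p) * 1 - p * 10_000        # bets on tails
# Both expectations are -$4,999.50: the error is in the bet, not the outcome.
```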