Is it possible to create some rule like this? Yeah, sure.
The problem is that you have to explain why that rule is valid.
If two babies are being tortured and one will die tomorrow but the other grows into an adult, your rule would claim that we should only stop one torture, and it’s not clear why since their phenomenal pain is identical.
It comes from valuing future world trajectories, rather than just valuing the present. I see a small difference between killing a fetus before delivery and an infant after delivery, and the difference I see is roughly proportional to the amount of time between the two (and the probability that the fetus will survive to become the infant).
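To make the shape of that rule concrete, here is a minimal sketch; the linear form, the implicit unit constant, and the function name are my own illustrative assumptions, not anything established above:

```python
def moral_weight_gap(years_between: float, p_survival: float) -> float:
    """Illustrative gradual rule (hypothetical): the moral difference
    between the two killings scales with the time separating them and
    the probability that the fetus survives to become the infant.
    The linear form and unit constant are assumptions, not claims."""
    return years_between * p_survival

# A fetus a week before delivery, ~99% likely to survive to birth:
print(moral_weight_gap(7 / 365, 0.99))    # ~0.019: a tiny difference
# An early pregnancy ~38 weeks out, say 60% likely to reach delivery:
print(moral_weight_gap(266 / 365, 0.60))  # ~0.437: a larger difference
```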
These sorts of gradual rules seem to me far more defensible than sharp gradations, because the sharpness in the rule rarely corresponds to a sharpness in reality.
What about a similar gradual rule for varying sentience levels of animals?
A quantitative measure of sentience seems much more reasonable than a binary measure. I’m not a biologist, though, and so don’t have a good sense of how sharp the gradations of sentience in animals are; I would naively expect basically every level of sentience from ‘doesn’t have a central nervous system’ to ‘beyond humans’ to be possible, but don’t know if there are bands that aren’t occupied for various practical reasons.
I don’t think anyone is advocating a binary system. No one is supporting voting rights for pigs, for example.
While sliding scales may more accurately represent reality, sharp gradations are the only way we can come up with a consistent policy. Abortion especially is a case where we need a bright line. The fact that we have two different words (abortion and infanticide) for what amounts to a difference of a couple of hours is very significant. We don’t want to let absolutely everyone use their own discretion in difficult situations.
Most policy arguments are about where to draw the bright line, not about whether we should adopt a sliding scale instead, and I think that’s actually a good idea. Admitting that most moral questions fall under a gray area is more likely to give your opponent ammunition to twist your moral views than it is to make your own judgment more accurate.
Some people value the future potential of things, and even give them moral value in cases where the present-time precursor or cause clearly has no moral status of its own. This corresponds to many people’s moral intuitions, and so they don’t need to explain why this is valid.
If you believe the sole justification for a moral proposition is that you think it’s intuitively correct, then no one is ever wrong, and these types of articles are rather pointless, no?
I’m a moral anti-realist. I don’t think there’s a “true objective” ethics out there written into the fabric of the Universe for us to discover.
That doesn’t mean there is no such thing as morals, or that debating them is pointless. Morals are part of what we are, and we perceive them as moral intuitions. Because we (humans) are very similar to one another, our moral intuitions are also fairly similar, and so it makes sense to discuss morals: we can influence one another, change our minds, better understand each other, and come to agreement or trade values.
Nobody is ever “right” or “wrong” about morals. You can only be right or wrong about questions of fact, and the only factual, empirical thing about morals is what moral intuitions some particular person has at a point in time.
If we can only stop one, sure. If we could stop both, why not do so?
If Alice bets $10,000 against $1 on heads and Bob bets $10,000 against $1 on tails, they’re both idiots, even though only one of them will lose.
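Spelled out as expected value, using just the arithmetic implied by the example (the 50/50 coin is the stated setup):

```python
# Each bettor risks $10,000 to win $1 on a fair coin flip.
p_win = 0.5
expected_value = p_win * 1 + (1 - p_win) * (-10_000)
print(expected_value)  # -4999.5: the bet is terrible for Alice and Bob
                       # alike, even though only one of them will lose.
```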