How many one-day-old infants would you be willing to kill to save a mentally healthy adult?
This is tricky because day-old infants typically have adult humans (“parents”) who care very strongly about them. You want me to ignore that, though, and assume that for some reason no adult will care about this infant dying? I think infants probably don’t have moral value in and of themselves, and it doesn’t horrify me that there have been cultures where infanticide/exposure was common and accepted. Other things being equal, though, I think we should err on the safe side and not kill infants, and I wouldn’t advocate legalizing infanticide in the US.
(Killing infants is also bad because we expect them to at some point have moral value, so maximizing over all time means we should include them.)
(Why doesn’t this argument also apply to using all of your gametes?)
P(a randomly chosen sperm will develop into an adult human) << P(a randomly chosen ovum will develop into an adult human) << P(a randomly chosen baby will develop into an adult human). Only one of these probabilities sounds large enough for the word “expect” to be warranted, IMO.
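To make the size of those gaps concrete, here is a rough Fermi sketch; every number in it is a loose illustrative assumption of mine, not a figure from this thread:

```python
# Rough Fermi numbers; every figure here is a loose assumption for
# illustration, not a claim from the thread.
sperm_per_ejaculation = 2e8   # typical order of magnitude
ova_per_lifetime = 400        # ova a woman ovulates over her lifetime
children_per_woman = 2        # roughly replacement-level fertility
p_baby_to_adult = 0.99        # low-mortality, developed-country figure

# Even in a cycle that ends in conception, at most one sperm "wins",
# so this is a generous upper bound:
p_sperm_to_adult = 1 / sperm_per_ejaculation             # ~5e-9
p_ovum_to_adult = children_per_woman / ova_per_lifetime  # ~5e-3

print(f"sperm: {p_sperm_to_adult:.0e}, ovum: {p_ovum_to_adult:.0e}, "
      f"baby: {p_baby_to_adult:.2f}")
# Each probability is orders of magnitude larger than the previous one;
# only the last seems big enough to license the word "expect".
```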
P(there will be a child who grows up to be an adult within the next couple of years, if you decide to conceive one) is, for many people, about the same as P(a randomly chosen baby will grow up to be an adult).
In each case you can take an action whose expected result is a human with moral value, so jkaufman’s argument should apply either way; the difference in opportunity cost is small.
Steel-man the argument.
Let’s say you have a machine that, with absolute certainty, will create an adult human whose life is not worth living, but who would not agree to suicide. Or whose life is only barely worth living, if you lean towards average utilitarianism.
It currently only has the DNA.
Would you turn it off?
How about if it’s already a fetus? A baby? Somewhere along the line, does the actual current state start to matter, and if so where?
...oh.
That highlights certain conflicts among my moral intuitions I hadn’t noticed before.
All in all, I think I would turn the machine off, unless the resulting person were going to live in an underpopulated country, or I knew that the DNA was taken from parents with unusually high IQ and/or other desirable heritable traits.
The machine incubates humans until they are the equivalent of 3 months old (the famed 4th trimester).
Would you turn it off at all stages?
(Not saying you misread me, but:)
The way I put it, it creates an adult human with absolute certainty. There may or may not be an actual, physical test tube involved; it could be a chessmaster AI, or whatnot. The implementation shouldn’t matter. For completeness, assume it’ll be an adult human who’ll live forever, so the implementation stage becomes an evanescent fraction of their existence.
The intended exception is that you can turn it off (destroying the (potential) human at that stage) at any time from DNA string to adult. There are, of course, no legal consequences or whatnot; steel-man as appropriate.
Given that, in what time period—if any—is turning it off okay?
Personally, I’ll go with “Up until the brain starts developing, then gradually less okay, based on uncertainty about brain development as well as actual differences in value.” I care very little about potential people.
I don’t know.
I’m so glad that I don’t live in the Least Convenient Possible World so I don’t have to make such a choice.
(Opportunity cost.)
I don’t understand. Take just the two gametes which ended up combining into the infant, ten months before the scenario in which your “expect them to at some point have moral value, so maximizing over all time means we should include them” applies.
Why doesn’t it apply to the two gametes, and why does it apply ten months later? Is it because the pregnancy is such a big investment? What if the woman finds the pregnancy utilon-neutral? Would the argument carry over then?
Let’s go up a step. I’m some kind of total utilitarian, which means maximizing over all creatures of moral worth over all time. I don’t think gametes have moral worth in and of themselves, and very small children probably don’t either, but both do have the potential to grow into creatures that can have positive or negative lives. The goal is, in the long term, to have as many such creatures as possible having the best lives possible.
While “using all your gametes” isn’t possible, most people could reproduce much more than they currently do. Parenting takes a lot of time, however, and with time being a limited resource there are lots of other things you can do with it. Many of these have a much larger effect on improving welfare, or on increasing the all-time number of people, than raising children does. It’s also not clear whether a higher or lower rate of human childbirth is ideal in the long term.
There’s a difference between an infant, which is already a living, breathing human being, & the sperm that are expelled in masturbation. Even if those sperm could have been used to produce more humans, there’s no way to prove that they actually would have been; the woman could fail to conceive, for example. If you wanted to make a law against masturbation, you’d also run into the problem that there is no victim, just someone who might possibly have existed at some point. I also see a conflict here with autonomy. Can we require people to turn all of their sperm into humans? They didn’t choose to produce sperm; it is an accident of biology. On the other hand, people do choose to have children (usually); it requires conscious choice & effort (excluding certain exceptions, like the female rape victim).
There’s not enough information for me to give an answer. Will there be external negative consequences to me (from the law or from the infants’ parents)? Do the infants’ parents want them, or are these infants donated to the Center for Moral Dilemmas by parents who don’t mind if something happens to them? Is the person I’m saving a stranger, or someone I know?
I’ll add in:
Will you judge me for the answer I provide? Will someone else do so? Will a potential future employer look this up? Will I, by answering this, slightly alter internal ethical injunctions against murdering children that, frankly, are there for a good reason?
None. You get thrown in jail or put to death for that kind of thing.
See The Least Convenient Possible World and Better Disagreement.
When you bring up things like the law, you’re breaking the thought experiment and dodging the really interesting question someone is trying to ask. The obvious intent of the question is to weigh how much you care about one-day-old infants against how much you care about mentally healthy adults. Whoever posed the experiment can clarify and add things like “Assume it’s legal” or “Assume you won’t get caught,” but if you force them to, you are wasting their time. And especially on an online forum, there is little incentive for them to do so, because if they do, you might just respond with other similar dodges, such as “I don’t want to kill enough that it becomes a significant fraction of the species,” or “By killing, I would damage my inhibitions against killing babies, which I want to preserve,” or “I don’t want to create grieving parents,” or “If people find out someone is killing babies, they will take costly countermeasures to protect their babies, which will cause harm across society.”
If you don’t want to answer without these objections out of the way, bring up the obvious fix to the thought experiment in your first answer, like “Assuming it was legal and I wouldn’t get caught, then I would kill N babies,” or “I wouldn’t kill any babies even if it was legal and I wouldn’t get caught, because I value babies so much,” and then explain the difference between babies and chickens that matters to you, because that’s obviously what Locaha was driving at.
I wouldn’t dismiss those so quickly. The more unrealistic assumptions you make, the less relevant the answer to the thought experiment’s dilemma will be to any decision I’ll ever have to make in the real world.
Yes, it’s less relevant to that, but the thought experiment isn’t intended to directly glean information about what you’d do in the real world; it’s supposed to gain information about the processes that decide what you would do in the real world. Once enough of this information is gained, it can be used to predict what you’d do in the real world, and also to identify real-world situations where your behavior is determined by ignorance of real-world facts, or is otherwise deviating from your goals, and in doing so perhaps change that behavior.