P(a randomly chosen sperm will result in an adult human) << P(a randomly chosen ovum will result in an adult human) << P(a randomly chosen baby will result in an adult human). Only one of these probabilities sounds large enough for the word “expect” to be warranted, IMO.
P(there will be a child who grows up to be an adult, if you decide to conceive one in the next couple of years) is, for many people, about the same as P(a randomly chosen baby will grow up to be an adult).
In each case you can take an action with the expected result of a human with moral value, so jkaufman’s argument should apply either way. The opportunity cost difference is low.
Steel-man the argument.
Let’s say you have a machine that, with absolute certainty, will create an adult human whose life is not worth living, but who would not agree to suicide. Or whose life is only barely worth living, if you lean towards average utilitarianism.
It currently only has the DNA.
Would you turn it off?
How about if it’s already a fetus? A baby? Somewhere along the line, does the actual current state start to matter, and if so where?
...oh.
That highlights certain conflicts among my moral intuitions I hadn’t noticed before.
All in all, I think I would turn the machine off, unless the resulting person were going to live in an underpopulated country, or I knew that the DNA was taken from parents with unusually high IQ and/or other desirable genetically heritable traits.
The machine incubates humans until they are the equivalent of 3 months old (the famed 4th trimester).
Would you turn it off at all stages?
(Not saying you misread me, but:)
The way I put it, it creates an adult human with absolute certainty. There may or may not be an actual, physical test tube involved; it could be a chessmaster AI, or whatnot. The implementation shouldn’t matter. For completeness, assume it’ll be an adult human who’ll live forever, so the implementation period becomes an evanescent fraction of their life.
The one intended exception is that you can turn it off (destroying the (potential) human at that stage) at any time, from DNA string to adult. There are, of course, no legal consequences or anything of the sort; steel-man as appropriate.
Given that, in what time period—if any—is turning it off okay?
Personally, I’ll go with “Up until the brain starts developing, then gradually less okay, based on uncertainty about brain development as well as actual differences in value.” I care very little about potential people.
I don’t know.
I’m so glad that I don’t live in the Least Convenient Possible World so I don’t have to make such a choice.