Why do you think that death is bad? Perhaps that would clarify this conversation. I personally can’t think of a reason that death is bad except that it precludes having good experiences in life. Nonexistence does the exact same thing. So I think that, rationally speaking, they’re morally identical.
Of course, if you’re using a naturalism-based intuitionist approach to morality, then you can recognize that it’s illogical to value existing persons more than potential ones, and yet still accept that existing people really do carry greater moral weight, simply because of the way you’re built. This is roughly what I believe, and why I don’t push very hard for large population increases.
I think perhaps that ‘Killing is bad’ might be a better phrasing.
I would be more specific, and say that ‘killing someone without their consent is always immoral’ as well as ‘bringing a person capable of consenting into existence without their consent is always immoral’. I haven’t figured out how someone who doesn’t exist could grant consent, but it’s there for completeness.
Of course, if you want to argue that time travel is killing people, I’ll point out that the ordinary passage of time results in omnicide every Planck time, followed by the creation of a new set of people. You’re not killing anyone; you’re simply selecting a different set of people to exist in the next Planck time.
That’s a hell of a thing to take as axiomatic. Taken one way, it seems to define birth as immoral; taken another, it allows the creation of potentially sapient self-organizing systems with arbitrary properties as long as they start out subsapient, which I doubt is what you’re looking for.
Neither of those people is capable of consenting, or refusing consent, to being brought into being.
The axiom, by the way, is “Interactions between sentient beings should be mutually consensual.”
I guess we’re looking at interpretation 2, then. The main problem I see with that is that for most sapient systems, it’s possible to imagine a subsapient system capable of organizing itself into a similar class of being, and it doesn’t seem especially consistent for a set of morals to prohibit creating the former outright and remain silent on the latter.
Imagine, for example, a sapient missile guidance system. Your moral framework seems to prohibit creating such a thing outright, which I can see reasoning for—but it doesn’t seem to prohibit creating a slightly nerfed version of the same software that predictably becomes sapient once certain criteria are met. If you’d say that’s tantamount to creating a sapient being, then fine—but I don’t see any obvious difference in kind between that and creating a human child, aside from its predicted use.
What’s wrong with creating a sapient missile guidance system? What’s the advantage of a sapient guidance system over a mere computer?
Given the existence of a sapient missile, it becomes impermissible to launch that missile without the consent of the missile. Just like it is impermissible to launch a spaceship without the permission of a human pilot...