‘bringing a person capable of consenting into existence without their consent is always immoral’
That’s a hell of a thing to take as axiomatic. Taken one way, it seems to define birth as immoral; taken another, it allows the creation of potentially sapient self-organizing systems with arbitrary properties as long as they start out subsapient, which I doubt is what you’re looking for.
I guess we’re looking at interpretation 2, then. The main problem I see with that is that for most sapient systems, it’s possible to imagine a subsapient system capable of organizing itself into a similar class of being, and it doesn’t seem especially consistent for a set of morals to prohibit creating the former outright and remain silent on the latter.
Imagine, for example, a sapient missile guidance system. Your moral framework seems to prohibit creating such a thing outright, which I can see reasoning for—but it doesn’t seem to prohibit creating a slightly nerfed version of the same software that predictably becomes sapient once certain criteria are met. If you’d say that’s tantamount to creating a sapient being, then fine—but I don’t see any obvious difference in kind between that and creating a human child, aside from predicted use.
What’s wrong with creating a sapient missile guidance system? What’s the advantage of a sapient guidance system over a mere computer?
Given the existence of a sapient missile, it becomes impermissible to launch that missile without the consent of the missile. Just like it is impermissible to launch a spaceship without the permission of a human pilot...
Neither of those people is capable of consenting, or refusing consent, to being brought into being.
The axiom, by the way, is “Interactions between sentient beings should be mutually consensual.”