That’s a worst case scenario. Even if necessary, are you willing to die so as to avoid a little creeeeeeeeeeepiness? Honestly, don’t you value your life? Why are you so willing to assume that a superintelligence can’t think of any better solutions than you can?
In principle, I’m willing to die to prevent the unethical creation of a person. (I might not act in accordance with this principle if I were presented with a very immediate threat to my survival, which I could avert by unethically creating a person; but the threats here are not immediate enough to cause me to so compromise my ethics.)
Why would the creation of such a person be unethical? Eir life would be worth living, and ey would make you happy as well. Human instincts around creepiness are not good metrics when discussing morality.
I think that people should be created by other persons who are motivated, at least in part, by an expectation to intrinsically value the person so created. If a FAI created a person for the express purpose of being my friend, it would presumably expect to value the person intrinsically, but that wouldn’t be its motivation in creating the person; its motivation in creating the person would have to do with valuing me. And if it modified its motivations to avoid annoying me in this way before it created the person, that would probably have other consequences for its actions that I wouldn’t care for, like motivating it to go around creating lots of persons left and right because people are just so darned intrinsically valuable and more are needed.
I’m sorry, but I’m going to have to call bollocks on this. Jesus Christ, don’t you want to live? Why aren’t you currently opting for euthanasia on the risk you end up friendless tomorrow?
Well, I probably won’t end up friendless tomorrow; and most of the mechanisms by which that could happen would not prohibit me from “opting for euthanasia”.
You probably won’t end up friendless in the event of a recovery from cryo storage. There is no reason you couldn’t choose to opt for euthanasia then either.
But in this case, it would be you that creates the person, with the purpose of intrinsically valuing em, and the FAI is just a tool you use to do it.
If we modify the case so the FAI isn’t autonomously creating the person, but rather waking me up and quizzing me on what I want em to be like, a) I really doubt I could do that in a timely fashion, and b) I think the creepiness might prevent me from wanting to do it at all.