While true, I suspect most or all of those people would have a hard time giving a good definition of “person” to an AI in such a way that the definition included babies, adults, and thinking aliens, but not pigs or bonobos. So yes, the claim I am implicitly making with this (or any other) controversial opinion is that I think almost everyone is wrong about this specific topic.
One rough effort at such a definition would be: “any post-birth member of a species whose adult members are intelligent and conscious”, where “birth” can be replaced by an analogous Schelling point in the development of an alien species, or by an arbitrarily chosen line at a similar stage of development, if no such Schelling point exists.
You might say that this definition is an arbitrary kludge that does not “carve Nature at the joints”. My reply would be that ethics is adapted for humans, and does not need to carve Nature at intrinsic joints but at the places that humans find relevant.
Your point about different rates of development is well taken, however. I am also not an expert in this topic, so we’ll have to let it rest for the moment.
For computers, hardware and software can be separated in a way that is not possible with humans (with current technology). When the separation is possible, I agree personhood should be attributed to the software rather than the hardware, so your machine should not be considered a person. If in the future it becomes routinely possible to scan, duplicate and emulate human minds, then killing a biological human will probably also be less of a crime than it is now, as long as his/her mind is preserved. (Maybe there would be a taboo instead about deleting minds with no backup, even when they are not “running” on hardware).
It is also possible that in such a future, where the concept of a person is commonly associated with a mind pattern, legalizing infanticide before brain development sets in would be acceptable. So perhaps we are not in disagreement after all, since on a different subthread you have said you do not really support legalization of infanticide in our current society.
I still think there is a bit of a meta disagreement: you seem to think that the laws and morality of this hypothetical future society would be better than our current ones, while I see it as a change in what the appropriate Schelling points are for the law to draw, in response to technological changes, without the end point being more “correct” in any absolute sense than our current law.
Oh, of course. I’ve taken it that you were asking about a case where such software had indeed been installed on the machine. The potential of personhood on its own seems hardly worth anything to me.
Well, yes. This seems obvious to me.