Let me suggest a candidate larger fuzzy class:
“sapiences that are (primarily) the result of Darwinian evolution, and have not had their evolved priorities and drives significantly adjusted (for example into alignment with something else)”
This would include any sufficiently accurate whole-brain emulation of a human, as long as they hadn’t been heavily modified, especially in their motivations and drives. It’s intended to be a matter of degree, rather than a binary classification. I haven’t defined ‘sapience’, but I’m using it in a sense in which Homo sapiens is the only species currently on Earth that would score highly for it, and one of the criteria for it is that the species can support cultural & technological information transfer between generations that is >> its genetic information transfer.
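Purely as an illustration of the “matter of degree” framing (not anything pinned down above), here’s a toy sketch in Python that treats membership in the class as a fuzzy score. Every field name, weight, threshold, and number in it is an invented placeholder, and the use of min as a fuzzy AND is just one standard choice:

```python
# Toy sketch: fuzzy membership in the proposed class, scored in [0, 1].
# All criteria, cutoffs, and magnitudes are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class Candidate:
    darwinian_origin: float       # 0..1: how much the mind is a product of evolution
    drives_unmodified: float      # 0..1: 1.0 = evolved priorities/drives left intact
    cultural_bits_per_gen: float  # information passed on culturally/technologically
    genetic_bits_per_gen: float   # information passed on genetically

def sapience_score(c: Candidate) -> float:
    """Degree to which cultural transfer dominates genetic transfer (the >> criterion)."""
    ratio = c.cultural_bits_per_gen / max(c.genetic_bits_per_gen, 1.0)
    return min(1.0, ratio / 100.0)  # saturate once cultural >> genetic; 100x is arbitrary

def class_membership(c: Candidate) -> float:
    """Fuzzy AND (min) over the criteria: high only if all three are high."""
    return min(c.darwinian_origin, c.drives_unmodified, sapience_score(c))

# A faithful whole-brain emulation of an unmodified human scores near 1.0;
# a heavily edited em, or an AGI built from scratch, scores much lower.
human_em = Candidate(1.0, 1.0, cultural_bits_per_gen=1e12, genetic_bits_per_gen=7e8)
print(class_membership(human_em))  # -> 1.0
```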
The moral design question then is: supposing we were to suddenly encounter an extraterrestrial sapient species, do we want our AGIs to be on the human side, or on the “all evolved intelligences count equally” side?
I’d say something in between. Do I want the AGI to simply genocide any aliens it meets on the basis that they are not human, and therefore do not matter? No. Do I want the AGI to stay neutral and refrain from helping us or taking sides, were we to meet the Thr’ax Hivemind, Eaters of Life and Bane of the Galaxy, because they too are sapient? Also no. I don’t think there’s an easy answer to where we draw the line between “we can find a mutual understanding, so we should try” and “it’s clearly us or them, so let’s make sure it’s us”.