Because you’re using “it’s fine to arbitrarily prioritize humans morally” as the justification for this privilege. At least that’s how I’m understanding you.
I think it’s fine for now, absent a more precise definition of what we consider human-like values and worth, which we obviously do not understand well enough to narrow down. I think the category is somewhat broader than humans, but I’m not sure I can give a better feel for it than “I’ll know it when I see it”, and that very ignorance seems to me an excellent reason not to go gallivanting about creating other potentially sentient entities of questionable moral worth.
I’m sure there are also humans that you cannot possibly coexist with.
Not many of them, and usually they indeed end up in jail or on the gallows because of their antisocial tendencies.
Let me suggest a candidate larger fuzzy class:
“sapiences that are (primarily) the result of Darwinian evolution, and have not had their evolved priorities and drives significantly adjusted (for example into alignment with something else)”
This would include any sufficiently accurate whole-brain emulation of a human, as long as they hadn’t been heavily modified, especially in their motivations and drives. It’s intended to be a matter of degree rather than a binary classification. I haven’t defined ‘sapience’, but I’m using it in a sense in which Homo sapiens is the only species currently on Earth that would score highly for it, and one of the criteria for it is that the species can support cultural & technological information transfer between generations that is >> its genetic information transfer.
The moral design question then is: supposing we were to suddenly encounter an extraterrestrial sapient species, do we want our AGIs to be on the human side, or on the “all evolved intelligences count equally” side?
I’d say something in between. Do I want the AGI to just genocide any aliens it meets on the simple basis that they are not human, so they do not matter? No. Do I want the AGI to stay neutral and refrain from helping us or taking sides were we to meet the Thr’ax Hivemind, Eaters of Life and Bane of the Galaxy, because they too are sapient? Also no. I don’t think there’s an easy answer to the question of where we draw the line between “we can find a mutual understanding, so we should try” and “it’s clearly us or them, so let’s make sure it’s us”.