But why would it need to sort its own preferences the same way humans do?
That is what I (thought I) was explaining in the following paragraphs. Once it a) knows what humans want, and b) desires acting in a way that matches that preference ranking, it must carve out a portion of the world’s ontology that excludes itself from being a recipient of that service.
It’s not that the Elf would necessarily want to be served like it serves others (although that is a failure mode too); it’s that the Elf would resemble a human well enough at that point that we would have to conclude that it’s wrong to treat it as a servant. The fact that it was made to enjoy it is no longer a defense, for the same reason it’s not a defense to say, “but I’ve already psychologically abused him/her enough that he/she enjoys this abuse!”
What seems to underlie this argument is the idea that no cognitive system can understand a human's values well enough to predict their preferences without sharing those values.
That’s not my premise. My premise is (simplifying a bit) that it’s the decision mechanism of a being that primarily determines its moral worth. From this it follows that beings adhering to decision mechanisms of similar enough depth and with similar enough values to humans ought to be regarded as human.
For that reason, I see a tradeoff between effectiveness at replicating humans vs. moral worth. You can make a perfect human replica, but at the cost of obligating yourself to treat it as having the rights of a human. See EY’s discussion of these issues in Nonperson Predicates and Can’t Unbirth a Child.
An alien race could indeed model humans well enough to predict us—but at that point they would have to be regarded as being of similar moral worth to us (modulo any dissonance between our values).
OK, I think I understand you now. Thanks for clarifying.