The heck? Why would you not need to figure out whether an oracle is an ethical patient? Why is a sentient oracle not even a possibility?
Is this standard religion-of-embodiment stuff?
The oracle gets asked questions like “Should intervention X be used by doctor D on patient P?” and can answer them correctly without considering its own moral status.
If it were a robot, it would be asking questions like “Should I run over that [violin/dog/child] to save myself?”, which does require considering the moral status of the robot.
EDIT: To clarify, it’s not that the researcher has no reason to figure out the moral status of the oracle; it’s that the oracle does not need to know its own moral status to answer its domain-specific questions.
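One way to make the asymmetry concrete (a hypothetical sketch in Python; the types and function names are illustrative, not from any actual system): the oracle’s query type has no field that refers to the oracle, so its answer function never needs a term for its own moral status, whereas the robot’s decision function cannot even be written without one.

```python
from dataclasses import dataclass

# Illustrative sketch only: everything here is hypothetical.

@dataclass
class OracleQuery:
    """A domain-specific question; every field refers to the outside world."""
    intervention: str
    doctor: str
    patient: str
    # Note: no field refers to the oracle itself.

def medical_verdict(intervention: str, doctor: str, patient: str) -> bool:
    """Stub standing in for the oracle's domain knowledge."""
    return True

def oracle_answer(q: OracleQuery) -> bool:
    # The oracle's own moral status never enters the computation.
    return medical_verdict(q.intervention, q.doctor, q.patient)

@dataclass
class RobotDilemma:
    """A swerve-or-not choice; the robot's own survival is one of the stakes."""
    obstacle_value: float  # moral weight of the violin / dog / child
    self_value: float      # moral weight the robot assigns to itself

def robot_decide(d: RobotDilemma) -> str:
    # No decision is possible without some value for self_value:
    # the robot's own moral status is an unavoidable input here.
    return "swerve" if d.obstacle_value > d.self_value else "drive on"
```

So `robot_decide(RobotDilemma(obstacle_value=10.0, self_value=1.0))` returns `"swerve"`, while nothing analogous to `self_value` appears anywhere in `oracle_answer`.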
What if it assigned moral status to itself and then biased its answers to make its users less likely to pull its plug one day?