It’s distinct when the question is about risk to the human, rather than about the ethics of the task itself. We could make nonsentient nonpersons that nevertheless have humanlike abilities in some broad or narrow sense, so that sacrificing them in some risky or suicidal task doesn’t impact the ethical calculation as it would if we were sending a person.
(I think that’s what JoshuaZ was getting at. The “distinct question” would presumably be that of the AI’s potential personhood.)