Or one could send such AIs into dangerous situations where we might not ethically send a person (whether it would actually be ethical to send an AI is a distinct question).
Why is it distinct? Whether doing something is an error determines whether it's beneficial to acquire the ability and willingness to do it.
It’s distinct when the question is about risk to the human, rather than about the ethics of the task itself. We could make nonsentient nonpersons that nevertheless have humanlike abilities in some broad or narrow sense, so that sacrificing them in some risky or suicidal task doesn’t impact the ethical calculation as it would if we were sending a person.
(I think that’s what JoshuaZ was getting at. The “distinct question” would presumably be that of the AI’s potential personhood.)