My example was one where (i) the ‘whom’ and the ‘context’ are clear and yet (ii) this obviously doesn’t dissolve the problem.
It may be a step toward dissolving the problem. It suggests the following questions:
Is it possible that an intelligent software object (like those in this novella by Ted Chiang), which exists within our space-time and can interact with us, might have a moral value very different from that of simulated intelligences in a simulated universe with which we cannot interact in real time?
Is it possible for an AI of the Chiang variety to act ‘immorally’ toward us? Toward other such AIs? If so, what ‘makes’ that action immoral?
What about AIs of the second sort? Clearly they cannot act immorally toward us, since they don’t interact with us. But can they act immorally toward each other? What is it about that action that ‘makes’ it immoral?
My own opinion, which I won’t try to convince you of, is that there is no moral significance without interaction and reciprocity (in a fairly broad sense of those two words).