… we do face the ‘further question’ of how much moral value to assign …
Yes, and I did not even attempt to address that ‘further question’ because it seems to me at least an order of magnitude more confused than the relatively simple question about consciousness.
But, if I were to attempt to address it, I would begin with the lesson from Econ 101 that dissolves the question “What is the value of item X?” The dissolution begins by requesting the clarifications “Value to whom?” and “Valuable in what context?” So, armed with this analogy, I would ask some questions:
Moral value to whom? Moral value in what context?
If I came to believe that the people around me were p-zombies, would that opinion change my moral obligations toward them? If you shared my belief, would that change your answer to the previous question?
Believed to be conscious by whom? Believed to be conscious in what context? Is it possible that a program object could be conscious in some simulated universe, using some kind of simulated time, but would not be conscious in the real universe in real time?
My example was one where (i) the ‘whom’ and the ‘context’ are clear and yet (ii) this obviously doesn’t dissolve the problem.
It may be a step toward dissolving the problem. It suggests the questions:
Is it possible that an intelligent software object (like those in this novella by Ted Chiang), which exists within our space-time and can interact with us, might have moral value very different from that of simulated intelligences in a simulated universe with which we cannot interact in real time?
Is it possible for an AI of the Chiang variety to act ‘immorally’ toward us? Toward each other? If so, what ‘makes’ that action immoral?
What about AIs of the second sort? Clearly they cannot act immorally toward us, since they don’t interact with us. But can they act immorally toward each other? What is it about that action that ‘makes’ it immoral?
My own opinion, which I won’t try to convince you of, is that there is no moral significance without interaction and reciprocity (in a fairly broad sense of those two words).