Could you clarify why you think that this reading assignment illuminates the question being discussed? I just reread it. For the most part, it is an argument against dualism. It argues that consciousness is (almost certainly) reducible to a physical process.
But this doesn’t have anything to do with what ArisKatsaris wrote. He was questioning whether consciousness can be reduced to a purely computational process (without “some unidentified physical reaction that’s absent to pure Turing machines”).
Consider the following argument sketch:
1. Consciousness can be reduced to a physical process.
2. Any physical process can be abstracted as a computation.
3. Any computation can be modeled as a Turing Machine computation.

Therefore, consciousness can be produced on a TM.
Each step above is at least somewhat problematic. Matt1 seemed to be arguing against step 1, and Drescher does respond to that. But ArisKatsaris seemed to be arguing against step 2. My choice would be to expand the definition of ‘computation’ slightly to include the interactive, asynchronous, and analog, so that I accept step 2 but deny step 3. Over the past decade, Wegner and Goldin have published many papers arguing that computation != TM.
It may well be that you can only get consciousness if you have a non-TM computation (mind) embedded in a system of sensors and actuators (body) which itself interacts with and is embedded within a (simulated?) real-time environment. That is, when you abstract the real-time interaction away, leaving only a TM computation, you have abstracted away an essential ingredient of consciousness.
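To make the distinction vivid (and only that; this is a toy sketch in Python with made-up names, in the spirit of the interactive-computation point above, not an argument about consciousness), compare a batch-style computation, which a TM captures exactly, with an interactive one whose input stream doesn’t even exist until the system has started acting:

```python
import time

# Toy contrast, nothing more: 'batch_controller' is the TM-style picture
# (all input available up front, one answer at the end), while
# 'interactive_controller' is coupled to an environment in real time.
# All names here are invented for this sketch.

def batch_controller(all_sensor_readings):
    """TM-style: the whole input exists before the run starts."""
    return [reading * 0.5 for reading in all_sensor_readings]

def interactive_controller(sense, act, steps=10):
    """Interactive: each output changes the world that produces the next input."""
    for _ in range(steps):
        reading = sense()      # this input did not exist before the previous act()
        act(reading * 0.5)     # the world reacts to what the controller just did
        time.sleep(0.01)       # crude stand-in for real-time coupling

class ToyEnvironment:
    """The next reading depends on the previous action, so the full input
    sequence cannot be written down ahead of the run."""
    def __init__(self):
        self.state = 1.0
    def sense(self):
        return self.state
    def act(self, command):
        self.state = self.state - command + 0.1

env = ToyEnvironment()
interactive_controller(env.sense, env.act)
print(env.state)
```

The only point of the toy environment is that its input tape cannot be written out in advance, because each reading depends on the controller’s previous action.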
For the most part, it is an argument against dualism. It argues that consciousness is (almost certainly) reducible to a physical process.
It actually sketches what consciousness is and how it works, from which you can see how we could implement something like that as an abstract algorithm.
The value of that description is not so much in reaching a certain conclusion, but in reaching a sense of what exactly we are talking about, and consequently why the question of whether “we can implement consciousness as an abstract algorithm” is uninteresting, since at that point you know more about the phenomenon than the words forming the question can access (similarly to how the question of whether a crocodile is a reptile is uninteresting once you know everything you need to know about crocodiles).
The problem here, I think, is that “consciousness” doesn’t get unpacked, and so most of the argument is on the level of connotations. The value of understanding the actual details behind the word, even if just a little bit, is in breaking this predicament.
I think I can see a rube/blegg situation here.

A TM computation perfectly modelling a human brain (let’s say) but without any real-time interaction, and a GLUT, represent the two ways in which we can have one of ‘intelligent input-output’ and ‘functional organization isomorphic to that of an intelligent person’ without the other.
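To keep the two cases straight, here is a toy contrast (purely illustrative; a three-entry table over arithmetic questions stands in for the impossibly large conversational GLUT):

```python
# Two ways of getting the same 'intelligent' input-output behaviour with
# very different internal organization.  The tiny arithmetic domain is a
# stand-in for the full conversational case, which no real lookup table
# could cover.

# (a) GLUT-style: every answer precomputed, nothing is 'worked out' at runtime.
GLUT = {
    "2+2": "4",
    "3*5": "15",
    "7-1": "6",
}

def glut_respond(question):
    return GLUT[question]

# (b) Organization-style: the answer is produced by a process whose
# structure mirrors the task (a toy evaluator here).
def working_respond(question):
    return str(eval(question, {"__builtins__": {}}))

# Identical behaviour on the covered inputs:
assert glut_respond("3*5") == working_respond("3*5") == "15"
```

The TM-without-interaction case is the mirror image: the organization is all there, but the input-output channel has been cut.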
What people think they mean by ‘consciousness’ - a kind of ‘inner light’ which is either present or not—doesn’t (straightforwardly) correspond to anything that objectively exists. When we hunt around for objective properties that correlate with places where we think the ‘inner light’ is shining, we find that there’s more than one candidate. Both ‘intelligent input-output’ and the ‘intelligent functional organization’ pick out exactly those beings we believe to be conscious—our fellow humans foremost among them. But in the marginal cases where we have one but not the other, I don’t think there is a ‘further fact’ about whether ‘real consciousness’ is present.
However, we do face the ‘further question’ of how much moral value to assign in the marginal cases—should we feel guilty about switching off a simulation that no-one is looking at? Should we value a GLUT as an ‘end in itself’ rather than simply a means to our ends? (The latter question isn’t so important given that GLUTs can’t exist in practice.)
I wonder if our intuition that the physical facts underdetermine the answers to the moral questions is in some way responsible for the intuition of a mysterious non-physical ‘extra fact’ of whether so-and-so is conscious. Perhaps not, but there’s definitely a connection.
… we do face the ‘further question’ of how much moral value to assign …
Yes, and I did not even attempt to address that ‘further question’ because it seems to me that that question is at least an order of magnitude more confused than the relatively simple question about consciousness.
But, if I were to attempt to address it, I would begin with the lesson from Econ 101 that dissolves the question “What is the value of item X?”. The dissolution begins by requesting the clarifications “Value to whom?” and “Valuable in what context?” So, armed with this analogy, I would ask some questions:
Moral value to whom? Moral value in what context?
If I came to believe that the people around me were p-zombies, would that opinion change my moral obligations toward them? If you shared my belief, would that change your answer to the previous question?
Believed to be conscious by whom? Believed to be conscious in what context? Is it possible that a program object could be conscious in some simulated universe, using some kind of simulated time, but would not be conscious in the real universe in real time?
My example was one where (i) the ‘whom’ and the ‘context’ are clear and yet (ii) this obviously doesn’t dissolve the problem.

It may be a step toward dissolving the problem. It suggests the questions:
Is it possible that an intelligent software object (like those in this novella by Ted Chiang) which exists within our space-time and can interact with us might have moral value very different from simulated intelligences in a simulated universe with which we cannot interact in real time?
Is it possible for an AI of the Chiang variety to act ‘immorally’ toward us? Toward each other? If so, what “makes” that action immoral?
What about AIs of the second sort? Clearly they cannot act immorally toward us, since they don’t interact with us. But can they act immorally toward each other? What is it about that action that ‘makes’ it immoral?
My own opinion, which I won’t try to convince you of, is that there is no moral significance without interaction and reciprocity (in a fairly broad sense of those two words.)
What people think they mean by ‘consciousness’ - a kind of ‘inner light’ which is either present or not—doesn’t (straightforwardly) correspond to anything that objectively exists.
It does, to some extent. There is a simple description that moves the discussion further. Namely, consciousness is a sensory modality that observes its own operation, and as a result it also observes itself observing its own operation, and so on; as well as observing external input, observing itself observing external input, and so on; and observing itself determining external output, etc.
This is an important idea, but I don’t think it can rescue the everyday intuition of the “inner light”.
I can readily imagine an instantiation of your sort of “consciousness” in a simple AI program of the kind we can already write. No doubt it would be an interesting project, but mere self-representation (even recursive self-representation) wouldn’t convince us that there’s “something it’s like” to be the AI. (Assume that the representations are fairly simple, and the AI is manipulating them in some fairly trivial way.)
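For concreteness, here is roughly the kind of program I have in mind (a minimal, purely illustrative sketch; the class and its ‘observations’ are invented for this comment):

```python
# A minimal sketch of 'recursive self-observation': the agent records its
# input, records that it recorded it, and so on to a fixed depth.  Nothing
# deep is claimed for the resulting trace.

class SelfObservingAgent:
    def __init__(self, depth=3):
        self.depth = depth
        self.trace = []            # the agent's record of its own operation

    def observe(self, event, level=0):
        self.trace.append((level, event))
        if level < self.depth:
            # observe the act of observing, recursively
            self.observe(f"I observed: {event}", level + 1)

    def step(self, external_input):
        self.observe(f"input {external_input!r}")
        output = str(external_input).upper()   # trivial 'behaviour'
        self.observe(f"I produced output {output!r}")
        return output

agent = SelfObservingAgent()
agent.step("hello")
for level, event in agent.trace:
    print("  " * level + event)
```

The printed trace is, in the letter of the description, the system observing itself observing, and yet nothing about it tempts us to attribute an inner light to it.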
Conversely, we think that very young children and animals are conscious in the “inner light” sense, even though we tend not to think of them as “recursively observing themselves”. (I have no idea whether and in what sense they actually do. I also don’t think “X is conscious” is unambiguously true or false in these cases.)