What people think they mean by ‘consciousness’, a kind of ‘inner light’ which is either present or not, doesn’t (straightforwardly) correspond to anything that objectively exists.
It does, to some extent. There is a simple description that moves the discussion further: consciousness is a sensory modality that observes its own operation, and as a result also observes itself observing its own operation, and so on; it likewise observes external input, observes itself observing external input, and so on; and it observes itself determining external output, and so on.
This is an important idea, but I don’t think it can rescue the everyday intuition of the “inner light”.
I can readily imagine an instantiation of your sort of “consciousness” in a simple AI program of the kind we can already write. No doubt it would be an interesting project, but mere self-representation (even recursive self-representation) wouldn’t convince us that there’s “something it’s like” to be the AI. (Assume that the representations are fairly simple, and the AI is manipulating them in some fairly trivial way.)
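To make the point concrete, here is a minimal sketch in Python of the sort of program I have in mind (the class, names, and structure are my own invention, purely illustrative): it logs external input, its own output decisions, and its own acts of logging, so that each observation is itself observed, up to an arbitrary depth.

```python
# A minimal sketch, not a serious proposal: all names here are my own
# invention. The agent logs external input, its own output decisions, and
# its own acts of logging; each act of logging is itself observed, so the
# log contains observations of observations, up to an arbitrary cutoff.

class SelfObservingAgent:
    def __init__(self, max_depth: int = 2):
        self.log = []               # everything the agent "observes"
        self.max_depth = max_depth  # cuts off the observe-the-observing regress

    def _observe(self, event: str, depth: int = 0) -> None:
        self.log.append(event)
        if depth < self.max_depth:
            # Observing is itself an event the agent observes, recursively.
            self._observe(f"observed: ({event})", depth + 1)

    def step(self, external_input: str) -> str:
        self._observe(f"input: {external_input}")
        output = external_input.upper()  # stand-in for any trivial processing
        self._observe(f"output: {output}")
        return output

agent = SelfObservingAgent()
agent.step("hello")
for entry in agent.log:
    print(entry)
# ...prints nested entries like "observed: (observed: (input: hello))".
```

The log contains recursive self-representations in exactly the sense described above, but the “observations” are just strings pushed onto a list; nothing about the program tempts us to say there’s “something it’s like” to be it.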
Conversely, we think that very young children and animals are conscious in the “inner light” sense, even though we tend not to think of them as “recursively observing themselves”. (I have no idea whether, and if so in what sense, they actually do. I also don’t think “X is conscious” is unambiguously true or false in these cases.)