I agree that today the concept of consciousness is very poorly defined, but I think that in the future it will be possible to define it in a way that makes sense, or at least to correct our current intuitions.
How can one tell if a human is conscious?
In humans, we have clues. For example, one can experiment by varying the duration of a light stimulus. Stimuli of very short duration are processed by visual areas V1 and V2 but do not propagate up to the parietal cortex. For stimuli of slightly longer duration, the light spot induces a conscious response: the subject can then report, "yes, I saw the spot." In humans, the study can thus be done on a declarative basis.
How can one tell if an AI is conscious?
At present, such an experiment cannot be performed on GPT (unless one draws an analogy between the duration of a stimulus for humans and the activations in the attention layers of the transformer?).
Indications of consciousness in an AI would be:
- A global workspace, which allows a central piece of information to be accessible from anywhere in the system (e.g., in the architecture diagram of the paper "Attention Is All You Need", the cross-attention link between the final encoder layer and every decoder layer; or, more recently, in Socratic Models, the implementation of a common language module)
- Consciousness seems to be a way of processing information in a Turing-complete fashion, but it is quite slow, whereas unconscious processing automatically handles a multitude of bits in parallel. So I would guess another clue would be a distinction between several modes of functioning:
an automatic mode, system 1, such as today's GPU-parallelized vision networks,
vs. a slower mode, system 2, which would be a very general serial program that cannot be parallelized and should run on a CPU.
- Perhaps an implementation of metacognition or a reflexive system: parts of the neural network that attempt to predict the future or current state of other parts of the neural network. But this point depends on whether one considers the mirror test to be necessary for consciousness.
- ?
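To make the first two clues concrete, here is a toy sketch of a global-workspace-style loop (purely hypothetical, for illustration only; it is not a claim about how GPT or any real system works). Specialist modules process a stimulus in parallel (the system-1-like phase); a single serial bottleneck then selects the most salient proposal and broadcasts it back to every module (the system-2-like phase), which is the "central information accessible from anywhere" property. The `Module` class, the name-overlap salience stub, and `workspace_step` are all invented for this sketch.

```python
from dataclasses import dataclass, field


@dataclass
class Module:
    """A specialist 'unconscious' processor that can also receive broadcasts."""
    name: str
    broadcasts_seen: list = field(default_factory=list)

    def propose(self, stimulus: str) -> tuple[float, str]:
        # Salience is stubbed as character overlap between the stimulus and
        # the module's name; a real system would compute it from the module's
        # own processing.
        salience = float(len(set(stimulus) & set(self.name)))
        return salience, f"{self.name}:{stimulus}"

    def receive(self, content: str) -> None:
        # Every module sees whatever enters the workspace.
        self.broadcasts_seen.append(content)


def workspace_step(modules: list[Module], stimulus: str) -> str:
    # System-1-like phase: all modules process the stimulus "in parallel".
    proposals = [m.propose(stimulus) for m in modules]
    # System-2-like phase: a single serial bottleneck picks one winner...
    _, winner = max(proposals)
    # ...and broadcasts it globally, making it accessible system-wide.
    for m in modules:
        m.receive(winner)
    return winner


modules = [Module("vision"), Module("audition"), Module("language")]
winner = workspace_step(modules, "vision input")
print(winner)                      # the content that won workspace access
print(modules[-1].broadcasts_seen)  # even the language module saw it
```

The metacognition clue could be grafted onto the same skeleton: a further module whose `propose` tries to predict another module's output rather than process the stimulus directly.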
That’s the easy case: each of us can tell we are conscious by introspection, and a normal-seeming, normally behaving person isn’t that different from ourselves.
Yeah, I know; I just wanted to begin answering with this, and to present in one sentence (without naming it, my bad...) the concept of neural correlates of consciousness.