I think you are very confused about how to interpret disagreements around which mental processes ground consciousness. These disagreements do not entail a fundamental disagreement about what consciousness is as a phenomenon to be explained.
Regardless of that, though, I just want to focus on one of your “referents of consciousness” here, because I also think the reasoning you provide for this particular claim is extremely weak. You write the following:
#9: Symbol grounding. Even within a single interaction, an LLM can learn to associate a new symbol to a particular meaning, report on what the symbol means, and report that it knows what the symbol means.
The behavioural capacity you describe does not suffice for symbol grounding. Indeed, the phrase “associate a new symbol to a particular meaning” begs the question, because the symbol grounding problem asks how it is that symbolic representations in computing systems can acquire meaning in the first place.
The most famous proposal for what it would take to ground symbols comes from Harnad’s classic 1990 paper. Harnad thought grounding was sensorimotor in nature and required both iconic and categorical representations, where iconic representations are formed via “internal analog transforms of the projections of distal objects on our sensory surfaces” and categorical representations are “those ‘invariant features’ of the sensory projection that will reliably distinguish a member of a category from any nonmembers” (Harnad thought connectionist networks were good candidates for forming such representations, and time has shown him to be right). Now, it seems to me highly unlikely that LLMs exhibit their behavioural capacities in virtue of iconic representations of the relevant sort, since they do not have “sensory surfaces” in anything like the right kind of way. Perhaps you disagree, but merely describing the behavioural capacities is not evidence.
Notably, Harnad’s proposal is actually one of the more minimal answers in the literature to the symbol grounding problem. Indeed, he received significant criticism for setting the bar so low. More demanding theorists have posited the need for multimodal integration (including cognitive and sensory modalities), embodiment (including external and internal bodily processes), normative functions, a social environment, and more besides. See Barsalou for a nice recent discussion, and Mollo and Millière for an interesting take on LLMs in particular.