I don’t think so. Compare the following two requests:
(1) Describe a refrigerator without using the word refrigerator or near-synonyms.
(2) Describe the structure of a refrigerator in terms of moving parts and/or subprocesses.
The first request demands the tabooing of words; the second demands an answer of a particular (theory-laden) form. I think the OP’s request is like request 2. What’s more, I expect that submitting request 2 to a random sample of people would license the same erroneous conclusion about “refrigerator” as the OP drew about “consciousness”.
This is not to say that “consciousness” poses no special challenges beyond those facing “refrigerator”. Indeed, I believe it does. However, the basic point that people can be referring to a single phenomenon even if they have different beliefs about that phenomenon’s underlying structure seems to me fairly straightforward.
Edit: I see sunwillrise gave a much more detailed response already. That response seems pretty much on the money to me.
I will also point people to this paper if they are interested in reading an attempt by a prominent philosopher of consciousness at defining it in minimally objectionable terms.
I think you are very confused about how to interpret disagreements over which mental processes ground consciousness. Such disagreements do not entail a fundamental disagreement about what consciousness is as a phenomenon to be explained.
Regardless of that, though, I just want to focus on one of your “referents of consciousness” here, because I also think the reasoning you provide for this particular claim is extremely weak. You write the following:
The behavioural capacity you describe does not suffice for symbol grounding. Indeed, the phrase “associate a new symbol to a particular meaning” begs the question, because the symbol grounding problem asks how it is that symbolic representations in computing systems can acquire meaning in the first place.
The most famous account of what it would take to ground symbols comes from Harnad’s classic 1990 paper. Harnad thought grounding was sensorimotor in nature and required both iconic and categorical representations, where iconic representations are formed via “internal analog transforms of the projections of distal objects on our sensory surfaces” and categorical representations are “those ‘invariant features’ of the sensory projection that will reliably distinguish a member of a category from any nonmembers” (Harnad thought connectionist networks were good candidates for forming such representations, and time has shown him to be right). Now, it seems to me highly unlikely that LLMs exhibit their behavioural capacities in virtue of iconic representations of the relevant sort, since they do not have “sensory surfaces” in anything like the right way. Perhaps you disagree, but merely describing the behavioural capacities is not evidence.
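To make the iconic/categorical distinction concrete, here is a minimal toy sketch in Python. To be clear, this is my own illustration, not anything from Harnad’s paper and nothing like a real connectionist model: I am assuming a “sensory surface” modelled as a 1-D array of receptor activations, an analog transform that is just a smoothing filter, and an “invariant feature” that is just total activation; all the function names (`sensory_projection`, `iconic_representation`, `categorical_representation`) are made up for the example.

```python
# Toy sketch of Harnad-style iconic vs. categorical representations.
# Assumptions (mine, not Harnad's): a 1-D "sensory surface", a smoothing
# filter as the "internal analog transform", and total activation as the
# "invariant feature" separating category members from non-members.
import numpy as np

rng = np.random.default_rng(0)

def sensory_projection(distal_size, n=32, noise=0.1):
    """Simulate a distal object of a given size projecting onto a
    1-D sensory surface of n receptors (a crude stand-in for a retina)."""
    surface = np.zeros(n)
    start = (n - distal_size) // 2
    surface[start:start + distal_size] = 1.0
    return surface + rng.normal(0.0, noise, n)

def iconic_representation(projection):
    """An 'internal analog transform' of the projection: here just a
    smoothed copy, preserving the analog structure of the input."""
    kernel = np.ones(3) / 3
    return np.convolve(projection, kernel, mode="same")

def categorical_representation(icon, threshold=8.0):
    """Discard analog detail, keeping one 'invariant feature' (total
    activation) that reliably sorts 'large' objects from 'small' ones."""
    return icon.sum() > threshold

# Large objects project more activation than small ones, so the invariant
# feature classifies them correctly despite sensory noise.
large = [sensory_projection(16) for _ in range(5)]
small = [sensory_projection(4) for _ in range(5)]
print([categorical_representation(iconic_representation(p)) for p in large])
print([categorical_representation(iconic_representation(p)) for p in small])
```

The only point of the toy is that the categorical representation is downstream of an analog transform of a sensory surface, and that is precisely the ingredient it is hard to see how an LLM could have.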
Notably, Harnad’s proposal is actually one of the more minimal answers in the literature to the symbol grounding problem. Indeed, he received significant criticism for putting the bar so low. More demanding theorists have posited the need for multimodal integration (including cognitive and sensory modalities), embodiment (including external and internal bodily processes), normative functions, a social environment and more besides. See Barsalou for a nice recent discussion and Mollo and Milliere for an interesting take on LLMs in particular.