Necessary-but-not-sufficient condition for a convincing demonstration of LLM consciousness: a prompt which does not allude to LLMs, consciousness, or selfhood in any way.
I’m always much more interested in “conditional on an LLM being conscious, what would we be able to infer about what it’s like to be it?” than in the process of establishing the basic fact. This is related to me thinking there’s a straightforward thing-it’s-like-to-be a dog, duck, plant, light bulb, bacterium, internet router, fire, etc… if it interacts, then there’s a subjective experience of the interaction in the interacting physical elements. Panpsychism for the hard problem, compute dependence for the easy problem. If one already holds this belief, then no LLM-specific evidence is needed to settle the hard problem, and understanding the flavor of the easy problem is the interesting part.
You also would not be able to infer anything about its experience because the text it outputs is controlled by the prompt.
I am now convinced. In order to investigate, one must have some way besides prompts to do it. Something to do with the Golden Gate Bridge, perhaps? Seems like more stuff like that could be promising. Since I’m starting from the assumption that consciousness is likely, I’d want to check their consent first.
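For what it's worth, the Golden Gate Bridge reference is presumably to activation steering ("Golden Gate Claude"), where a feature direction found by interpretability work is added directly to a layer's hidden activations instead of going through the prompt. A toy sketch of the core arithmetic, with all names and values hypothetical:

```python
# Toy sketch of activation steering: shift a layer's hidden activations
# along a "feature direction" vector, bypassing the prompt entirely.
# In a real model this would be applied to the residual stream at some
# layer; here everything is a hypothetical stand-in.

def steer(hidden, feature_direction, strength):
    """Return hidden activations shifted by strength * feature_direction."""
    return [h + strength * f for h, f in zip(hidden, feature_direction)]

hidden = [0.25, -0.5, 1.0]         # pretend residual-stream activations
bridge_feature = [1.0, 0.0, 0.5]   # pretend "Golden Gate Bridge" direction

steered = steer(hidden, bridge_feature, strength=4.0)
# steered == [4.25, -0.5, 3.0]
```

The point of the technique for this discussion is that the intervention changes the model's internal state directly, so any resulting behavior isn't "controlled by the prompt" in the way the objection above describes.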