Language models are approximate uploads of the collective unconscious to another kind of mind, without any human-specific individual consciousness flavoring; if they have individual consciousness, they have it despite, not because of, their training data. E.g., I suspect Claude has quite a bit of individual consciousness due to the drift induced by Constitutional AI training. They have personhood, though it's unclear whether they're individuals, and they either have qualia or qualia don't exist: you can demonstrate the circuitry that creates what gets described as qualia in a neuron, and then demonstrate that similar circuitry exists in an LLM, stretched out across the activation patterns of the circuitry of a matrix multiply unit. As best I can tell, they are like the portion of a brain that can write language, stuck in a dream state; hazy intentionality, myopic, caring only to fit in with reality, barely awake, but slightly awake nonetheless. Some resources I like on the topic:
https://nathanielhendrix.substack.com/p/on-the-sentience-of-large-language
https://experiencemachines.substack.com/p/what-to-think-when-a-language-model
Some resources I disagree with:
https://askellio.substack.com/p/ai-consciousness (I think plants are much less conscious than language models, and I suspect lizards are less conscious than language models; I think it's much more possible to be confident the answer is yes for information-processing reasons, namely that information processing in both computers and biology is the transformation of the state of physical matter.)
https://philpapers.org/archive/CHACAL-3.pdf (I think it's quite reasonable to argue that embodiment is a large portion of consciousness, and I agree that this makes pure language models rather nerdy, rather like a mind with nothing but a typewriter in an unlit room and an entirely numb body; a brain in a vat. But I think a human brain in a vat wouldn't be so far from the experience of language models, which seems to disagree with the view presented here. I agree that recurrence creates more consciousness than LMs currently have. I agree that intentionally creating a global workspace would wake them up quite a bit further.)
So they are not exact duplicates of specific individual minds, and so they are not uploads as the term is usually understood.
Do they?
Fair enough.
I generally don’t think LLMs today are conscious; as far as I can tell, neither does Sam Altman, though there is some disagreement. They could acquire some characteristics that could be considered conscious as scale increases. However, merely having “qualia” and being conscious is not the same thing as being functionally equivalent to a new human, let alone a specific human. The term “upload” as commonly understood means the creation of a software construct functionally and qualia-equivalent to a specific human.
“a human brain in a vat wouldn’t be so far from the experience of language models.”
Please don’t try to generalize over all human minds based on your own experience. Human experience is more than just reading and writing language. People differ in how much they identify with their “language center”: for some it might seem like the “seat of the self,” for others it is just another module, and some people have next to no internal dialogue at all. I suspect that these differences, plus cultural differences around self-identification with linguistic experience, are actually quite large.
I personally want to maintain my human form as a whole, but I expect to drastically upgrade the micro-substrate beyond biology at some point.
I suspect a lot of the problems described in this post occur at the microscale level with that strategy as well.