My response to this is extremely negative, since I could hardly disagree with the assumptions of this post more. It is just so wrong. I’m not sure there is even a point in engaging across such complete disagreement, and my commenting at all may be pointless. Even granting that there are many possible definitions of consciousness, and that people mean somewhat different things by them, the premise of this article is completely and obviously wrong: chatbots clearly do not have any consciousness, by any even vaguely plausible definition. There is literally no reason to believe an LLM is conscious, even if I were to allow terribly weak definitions of consciousness. (It may be possible for something we would recognize as an AI to be conscious, but definitely not an LLM. We know how LLMs work, and they just don’t do these things.)
As to the things you are greater than 90% sure of, I am greater than 99% certain they do not ‘experience’ any of them: introspection, purposefulness, experiential coherence, perception of perception, awareness of awareness, sense of cognitive extent, and memory of memory. Symbol grounding is the only one where I am not greater than 99% sure, because they do have an incredibly tiny amount of grounding; but even full symbol grounding would not be consciousness. Grounding is related to knowledge and understanding, but it is clearly not consciousness. Also, purposefulness is clearly not even especially related to consciousness: by that definition you would have to call a purely mechanical thermostat ‘conscious’ (see the sketch below).
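To make the thermostat point concrete, here is a minimal sketch (purely illustrative; the function name and thresholds are my own invention): a few lines of Python that behave ‘purposefully’ in exactly the sense at issue, by sensing a state, comparing it to a goal, and acting to close the gap. If purposefulness alone implied consciousness, this loop would qualify.

```python
# A "purposeful" system with no plausible consciousness: it senses a state,
# compares it to a goal, and acts to close the gap. (Illustrative sketch only.)

def thermostat_step(current_temp: float, target_temp: float) -> str:
    """Return the action that moves the room toward the target temperature."""
    if current_temp < target_temp - 0.5:
        return "heat on"
    if current_temp > target_temp + 0.5:
        return "heat off"
    return "hold"

print(thermostat_step(17.0, 20.0))  # -> heat on
```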
Similarly, I am greater than 99% certain that they experience none of the ones you are 50% sure of: #4 (holistic experience of complex emotions), #5 (experience of distinctive affective states), #6 (pleasure and pain), #12 (alertness), #13 (detection of cognitive uniqueness), and #14 (mind-location). A number of these clearly aren’t related to consciousness either: pleasure/pain, alertness, and probably detection of cognitive uniqueness, though I can’t be certain about that last one because it is too vague for me to tell what you mean.
Additionally, I am 100% sure about the ones you are only 75% sure of: there is logically no possibility that current LLMs have proprioception, awakeness, or a vestibular sense. (Why in the world is that last one even mentioned?) Awakeness is not even fully required for consciousness, while the other two have nothing to do with it at all.
Anyone who thinks an LLM has consciousness is just anthropomorphizing anything that has the ability to put together sentences. (Which, to be fair, used to be a heuristic that worked pretty well.)
The primary reason people should care about consciousness is the question ‘are they people?’ (in the same sense that humans, aliens, and certain androids in Star Trek or other sci-fi are people). It is 100% certain that, unless panpsychism is true (highly unlikely, and equally applicable to a rock), this kind of device is not a person.
I’m not going to list why you are wrong on every point in the appendix, just some. Nothing in your evidence seems at all convincing.

Introspection: The ability to string together an extra sentence on what a word in a sentence could mean isn’t even evidence of introspection. (At most it would be evidence of the ability to do that about others, not about itself.) We know it doesn’t know why it did something.

Purposefulness: Not only irrelevant to consciousness, but also still not evidence. It just looks up in its context window what you told it to do and then comes up with another sentence that fits.

Perception of perception: You are still tricking yourself with anthropomorphization. The answer to the question is always more likely a sentence like ‘no’. The actual test would be giving them a picture where the answer should be the opposite of the normal ‘no’ answer.

As you continue on, you keep asking leading questions that have obvious answers, and answering them is exactly what it is designed to do. We know how an LLM operates, and what it does is follow leads to complete sentences (see the toy sampling loop below).

You don’t seem to understand symbol grounding, which is not about getting it to learn new words disconnected from the world, but about how the words relate to the world.
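To spell out what I mean by ‘follow leads to complete sentences’, here is a schematic toy loop (my own sketch, not any real model’s code; toy_next_token_distribution is a hypothetical stand-in for the trained network): at each step the system maps the context so far to a distribution over next tokens and samples one. That is the entire operation; there is no step in the loop where anything like perception of perception could occur.

```python
import random

def toy_next_token_distribution(context):
    """Hypothetical stand-in for the trained network, which in a real LLM
    maps the context so far to next-token probabilities."""
    return {"yes": 0.1, "no": 0.6, "maybe": 0.3}

def generate(prompt_tokens, max_new_tokens=5):
    # Schematic autoregressive loop: condition on the context, sample one
    # token, append it, repeat. Nothing else happens between tokens.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = toy_next_token_distribution(tokens)
        tokens.append(random.choices(list(probs), weights=list(probs.values()))[0])
    return tokens

print(generate(["do", "you", "perceive", "your", "perception", "?"]))
```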
I think many of the things Critch has listed as definitions of consciousness are not “weak versions of some strong version”; they’re just different things.
You bring up a few times that LLMs don’t “experience” [various things Critch lists here]. I agree, they pretty likely don’t (in most cases). But part of Critch’s point, as I interpreted it, was that there are many things people mean by “consciousness” that aren’t actually about “experience” or “qualia” or whatnot.
For example, I’d bet (75%) that when Critch says they have introspection, he isn’t making any claims about them “experiencing” anything at all – I think he’s instead saying “in the same way that their information processing system knows facts about Rome and art and biology and computer programming, and can manipulate those facts, it can also know and manipulate facts about its thoughts and internal states” (whereas other ML algorithms may not be able to know and manipulate their thoughts and internal states).
Purposefulness: Not only irrelevant to consciousness but...
A major point Critch was making in the previous post is that when people say “consciousness”, this is one of the things they sometimes mean. The point is not that LLMs are conscious in the way you are using the word, but that when you see debates about whether they are conscious, those debates will include some people who think it means “purposefulness”.
I agree that people use consciousness to mean different things, but some definitions need to be ignored as clearly incorrect. If someone wants to use a definition of ‘red’ that includes large amounts of ‘green’, we should ignore them. Words mean something, and can’t be stretched to include whatever the speaker wants them to if we are to speak the same language (leaving aside cross-language differences, like how ‘no’ means ‘of’ in Japanese). Things like purposefulness are their own separate phenomena, with their own terms meant for them, and we can meaningfully talk about them if people choose to use the right words. If ‘introspection’ isn’t meant as the internal process, don’t use the term, because it is highly misleading. I do think you are probably right about what Critch means when he uses the term introspection, but he would still be wrong if he meant that (since the models are reflecting on word choice, not on the internal states that led to it).
I don’t feel very hopeful about the conversation atm, but fwiw I feel like you are missing a fairly important point while being pretty overconfident about not having missed it.
Putting it a different way: is there a percentage of people who could disagree with you about what consciousness means that would convince you it’s not as straightforward as assuming you have the correct definition of consciousness and can ignore everyone else? If <50% of people agreed with you? If <50% of the people with most of the power?
(This is not about whether your definition is good, or the most useful, or whatnot – only that, if lots of people turned out to mean different things by it, would it still particularly matter whether your definition was the “right” one?)
(My own answer is that if like >75% of people agreed on what consciousness means, I’d be like “okay, yeah, Critch’s point isn’t super compelling”. If it was between like 50–75% of people, I’d be like “kinda edge case”. If it’s <50% of people agreeing on consciousness, I don’t think it matters much what definition is “correct”.)
I agree LLMs are probably not conscious, but I don’t think it’s self-evident they’re not; we have almost no reliable evidence one way or the other.
I did not use the term ‘self-evident’, and I do not necessarily believe it is self-evident, because in theory we can’t prove anything isn’t conscious. My more limited claim is not that it is self-evident that LLMs are not conscious; it’s that they just clearly aren’t conscious. ‘Almost no reliable evidence’ in favor of consciousness is coupled with the fact that we know how LLMs work (and the details we don’t know are probably not important to this matter), and how they work is no more related to consciousness than an ordinary computer program is. Given what we know, it would take an incredible amount of evidence to make ‘it might be conscious’ a reasonable position. If panpsychism is true, then they might be conscious (as would a rock!), but panpsychism is incredibly unlikely.