You might believe that the distinctions I make are idiosyncratic, though the meanings are in fact clearly distinct in ordinary usage, but I do not agree with your misleading use of what people would be led to think are my words, and you should take care not to conflate things. You want people to precisely match your own qualifiers even in cases where doing so makes no difference to the meaning of what is said (which makes enough sense), but you will directly object to people pointing out a clear miscommunication of yours because you do not care about that difference in meaning. And you are continually asking me to give in on language regardless of how correct I may be, while claiming that your usage is the better one to privilege. That is not a useful approach.
(I take no particular position on physicalism at all.) Since you are not a panpsychist, you would likely believe that consciousness is not common to the vast majority of things. That means the basic prior for whether an item is conscious is 'almost certainly not' unless we have already updated it based on other information. Under what reference class or mechanism should we be more concerned about the consciousness of an LLM than about an ordinary computer running ordinary programs? There is nothing in an LLM's operating principles that seems particularly likely to lead to consciousness.
There are many people, including the original poster of course, trying to use behavioral evidence to get around that, so I pointed out how weak that evidence is.
An important distinction you seem not to see in my writing (whether because I wrote unclearly or you missed it doesn't really matter) is that when I speak of knowing the mechanisms by which an LLM works, I mean something very fundamental. We know these two things: 1) exactly what mechanisms are used to do the operations involved in executing the program (physically on the computer and mathematically), and 2) the exact mechanisms through which we determine which operations to perform.
As you seem to know, LLMs are actually extremely simple programs of extremely large matrices with values chosen by the very basic system of gradient descent. Nothing about gradient descent is especially interesting from a consciousness point of view. It's basically a massive use of very simplified ODE solvers in a chain, which are extremely well understood and clearly have no consciousness at all if anything mathematical doesn't. It could also be viewed as just a very large number of variables in a massive but simple statistical regression. Notably, even if gradient descent were directly related to consciousness, we would still have no reason to believe that an LLM doing inference rather than training would be conscious. Simple matrix math doesn't seem like much of a candidate for consciousness either.
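To make concrete what I mean by 'very basic', here is a minimal, purely illustrative sketch (my own toy example, not anyone's actual training code) of the kind of mechanism involved: a single weight matrix fitted by plain gradient descent on a squared-error loss. Real LLM training differs enormously in scale and in details like the loss and optimizer, but each step is still this sort of 'multiply matrices, compute a gradient, nudge the weights'.

```python
import numpy as np

# Toy "model": one weight matrix W mapping inputs to outputs.
# Real LLMs stack many such matrices (plus attention), but each layer
# is still just matrix multiplication with learned values.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))         # 100 example inputs
true_W = rng.normal(size=(8, 4))
Y = X @ true_W                        # targets we want the model to fit

W = np.zeros((8, 4))                  # weights start arbitrary
lr = 0.01                             # gradient-descent step size

for step in range(2000):
    pred = X @ W                      # forward pass: just matrix math
    grad = X.T @ (pred - Y) / len(X)  # gradient of mean squared error
    W -= lr * grad                    # nudge the weights downhill

print(np.abs(W - true_W).max())       # W has converged close to true_W
```

Nothing in that loop looks like a plausible seat of consciousness, which is the point.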
Someone trying to make the case for consciousness would thus need to think it likely that one of the other mechanisms in LLMs is related to consciousness, but LLMs are actually missing a great many mechanisms that would enable things like self-reflection and awareness (including some, such as recursion and internal loops, that were present in earlier, more primitive neural networks). The people trying to make up for those omissions do a number of things to attempt to recreate them ('attention' being the built-in one, but also things like adding in the use of previous outputs), but those very simple approaches don't seem like likely candidates for consciousness (to me).
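As a hedged illustration of what I mean by 'adding in the use of previous outputs': the sketch below is my own simplification (the `next_token_distribution` function is a dummy stand-in for a whole trained network), but it captures the structure. The model itself is a fixed, stateless forward pass; any appearance of memory or self-reference comes from an external loop feeding the model's prior outputs back in as part of the next input.

```python
import random

VOCAB_SIZE = 16

def next_token_distribution(tokens):
    # Dummy stand-in for an LLM forward pass: in reality a fixed stack
    # of matrix multiplications and attention. Crucially, it keeps no
    # hidden state between calls; it only sees the tokens passed in.
    random.seed(sum(tokens))
    return [random.random() for _ in range(VOCAB_SIZE)]

def generate(prompt_tokens, n_steps):
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        # The only 'memory' is the growing token list fed back in.
        probs = next_token_distribution(tokens)
        tokens.append(max(range(VOCAB_SIZE), key=probs.__getitem__))
    return tokens

print(generate([3, 1, 4], 5))
```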
Thus, it remains extremely unlikely that an LLM is conscious.
When you say we don't know what mechanisms are used, you seem to be talking about not understanding a completely different thing than the one I am saying we understand. We don't understand exactly what each weight means (except in some rare cases that some researchers have seemingly figured out), or why it was chosen to be that value rather than any number of other values that would work out similarly, but that is most likely unimportant to my point about mechanisms. This is, as far as I can tell, a genuine ambiguity in the meaning of 'mechanism': we can be talking about completely different levels at which mechanisms could operate, and I am talking about the very lowest ones.
Note that I do not usually make a claim about the mechanisms underlying consciousness in general, except that it is unlikely to be these extremely basic physical and mathematical ones. I genuinely do not believe that we know enough about consciousness to nail it down to even a small subset of theories. That said, there are still a large number of theories of consciousness that either don't make internal sense or seem at most like components of consciousness rather than the whole of it.
Pedantically, if consciousness is related to 'self-modeling', the implication is that the modeling needs to be internal, for the basic reason that it is just 'modeling' otherwise. I can't prove that external modeling isn't enough for consciousness (how could I?), but I am unaware of anyone making that contention.
So, would your example be 'self-modeling'? Your brief sentence isn't enough for me to be sure what you mean. But if it is related to the recent claims about introspection on this board, then I don't think so. That would be modeling the external actions of an item that happened to turn out to be itself. For example, if I were to read the life story of a person I didn't realize was me, and make inferences about how the subject would act under various conditions, that wouldn't really be self-modeling. On the other hand, in the comments there, I actually proposed that you could train an LLM on its own internal states, and that could maybe have something to do with this (if the self-modeling theory is true). This is something we do not train current LLMs on at all, though.
As far as I can tell (as someone who finds the very idea of illusionism strange), illusionism is itself not a useful point of view with regard to this dispute, because it would make the question of whether an LLM is conscious pretty moot. Effectively, the answer would be something like 'why should I care?' or 'no', or even 'to the same extent as people', depending on the mood of the speaker, regardless of how an LLM (or any ordinary computer program, almost all of which process information heavily) works. If consciousness is an illusion, we aren't talking about anything real, and it is thus useful to ignore illusionism when talking about this question.
As I mentioned before, I do not have a particularly strong theory for what consciousness actually is or even necessarily a vague set of explanations that I believe in more or less strongly.
I can't say I've heard of 'attention schema theory' before, nor some of the other things you mention next, like 'efference copy'. (The latter seems to be all about the body, which doesn't seem all that promising as a theory of what consciousness may be, though I also can't rule out its being part of it, since the idea is that it is used in self-modeling, which, as I mentioned earlier, I can't actually rule out either.)
My pet theory of emotions is that they are simply a shorthand for 'you should react in ways appropriate to a situation that is...' a certain way. For example (and these were not carefully chosen examples), anger would be 'a fight', happiness would be 'very good', sadness would be 'very poor', and so on. More complicated emotions might include things like the situation being good but also high stakes. The reason for using a shorthand would be that our conscious mind is very limited in what it can fit at once. Despite this being uncertain, I find it much more likely than emotions themselves being consciousness.
I would explain things like blindsight (from your ipsundrum link) through having a subconscious mind that gathers information and makes a shorthand before passing it to the rest of the mind (much like my theory of emotions). The shorthand without the actual sensory input could definitely lead to not seeing but being able to use the input to an extent nonetheless. Like you, I see no reason why this should be limited to the one pathway they found in certain creatures (in this case mammals and birds). I certainly can’t rule out that this is related directly to consciousness, but I think it more likely to be another input to consciousness rather than being consciousness.
Side note, I would avoid conflating consciousness and sentience (like the ipsundrum link seems to). Sensory inputs do not seem overly necessary to consciousness, since I can experience things consciously that do not seem related to the senses. I am thus skeptical of the idea that consciousness is built on them. (If I were really expounding my beliefs, I would probably go on a diatribe about the term ‘sentience’ but I’ll spare you that. As much as I dislike sentience based consciousness theories, I would admit them as being theories of consciousness in many cases.)
Again, I can't rule out global workspace theory, but I am not sure how it is especially useful. What makes a global workspace conscious that doesn't happen in an ordinary computer program I could theoretically write myself? A normal program might take a large number of inputs, process them separately, and then put it all together in a global workspace. It thus seems more like a theory of 'where it occurs' than of 'what it is'.
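To be concrete about the kind of ordinary program I mean (purely my own toy example, not anyone's implementation of the theory): independent modules each process their own inputs and post summaries into one shared structure that everything else can read. Nothing about adding that shared structure seems to make the program any more conscious.

```python
# Toy 'global workspace': independent modules post summaries into one
# shared structure that every other part of the program can read.
workspace = {}

def vision_module(pixels):
    workspace["vision"] = f"{sum(pixels)} bright pixels"

def audio_module(samples):
    workspace["audio"] = f"peak volume {max(samples)}"

def decide():
    # Any part of the program can read the whole workspace and act on it.
    return "react" if {"vision", "audio"} <= workspace.keys() else "wait"

vision_module([1, 0, 1, 1])
audio_module([0.2, 0.9, 0.4])
print(workspace, decide())
```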
‘Something to do with electrical flows in the brain’ is obviously not very well specified, but it could possibly be meaningful if you mean the way a pattern of electrical flows causes future patterns of electrical flows as distinct from the physical structures the flows travel through.
Biological nerves being the basis of consciousness directly is obviously difficult to evaluate. It seems too simple, and I am not sure whether it is possible to have such a tiny amount of consciousness that many of them then add up to our level of consciousness. (I am also unsure about whether there is a spectrum of consciousness beyond the levels known within humans.)
I can't say I would believe a slime mold is conscious (but again, I can't prove it is impossible). I would probably not believe any simple animals (like ants) are either, even if someone had a good explanation for why their theory says the ant would be. Ants and slime molds still seem more likely to be conscious to me than current LLM-style AI, though.