I did not use the term ‘self-evident’ and I do not necessarily believe it is self-evident, because theoretically we can’t prove anything isn’t conscious. My more limited claim is not that it is self-evident that LLMs are not conscious; it’s that they just clearly aren’t conscious. ‘Almost no reliable evidence’ in favor of consciousness is coupled with the fact that we know how LLMs work (and the details we do not know are probably not important to this matter), and how they work is no more related to consciousness than an ordinary computer program is. Given what we know, it would require an incredible amount of evidence to make it reasonable to consider that an LLM might be conscious. If panpsychism is true, then they might be conscious (as would a rock!), but panpsychism is incredibly unlikely.
My dialect does not have the fine distinction between “clear” and “self-evident” on which you seem to be relying; please read “clear” for “self-evident” in order to access my meaning.
Pedantically, ‘self-evident’ and ‘clear’ are different words, and you should not have emphasized ‘self-evident’ in a way that makes it seem like I used it, regardless of whether you personally make that distinction. I then explained why a lack of evidence should be read against the idea that a modern AI is conscious (basically, the prior probability is quite low).
My emphasis implied you used a term which meant the same thing as self-evident, which in the language I speak, you did. Personally I think the way I use words is the right one and everyone should be more like me; however, I’m willing to settle on the compromise position that we’ll both use words in our own ways. As for the prior probability, I don’t think we have enough information to form a confident prior here.
Do you hold panpsychism as a likely candidate? If not, then you most likely believe the vast majority of things are not conscious. We have a lot of evidence that the way an LLM operates is not meaningfully different, in any way we don’t understand, from other objects. Thus, almost the entire reference class would be things that are not conscious. If you do believe in panpsychism, then obviously AIs would be conscious too, but it wouldn’t be an especially meaningful statement.
You could choose computer programs as the reference class, but most people are quite sure those aren’t conscious in the vast majority of cases. So what, in the mechanisms underlying an LLM, is meaningfully different in a way that might cause consciousness? There don’t seem to be any likely candidates at a technical level. Thus, we should not raise our prior above that of other computer programs. This does not rule out consciousness, but it does make it rather unlikely.
I can see you don’t appreciate my pedantic points regarding language, but be more careful if you want to say that you are substituting a word for one I used. If it was meant as a translation, it is bad communication: it would easily mislead people into thinking I claimed it was ‘self-evident’. I don’t think we can meaningfully agree to use words in our own ways if we are actually trying to communicate, since that would be self-refuting (we wouldn’t know what we were agreeing to if the words don’t have a shared meaning).
You in particular clearly find it to be poor communication, but I think the distinction you are making is idiosyncratic to you. I also have strong and idiosyncratic preferences about how to use language, which from the outside view are equally likely to be correct; the best way to resolve this is of course for everyone to recognize that I’m objectively right and adjust their speech accordingly, but I think the practical solution is to privilege neither above the other.
I do think that LLMs are very unlikely to be conscious, but I don’t think we can definitively rule it out.
I am not a panpsychist, but I am a physicalist, and so I hold that thought can arise from inert matter. Animal thought does, and I think other kinds could too. (It could be impossible, of course, but I’m currently aware of no reason to be sure of that). In the absence of a thorough understanding of the physical mechanisms of consciousness, I think there are few mechanisms we can definitively rule out.
Whatever the mechanism turns out to be, however, I believe it will be a mechanism which can be implemented entirely via matter; our minds are built of thoughtless carbon atoms, and so too could other minds be built of thoughtless silicon. (Well, probably; I don’t actually rule out that the chemical composition matters. But like, I’m pretty sure some other non-living substances could theoretically combine into minds.)
You keep saying we understand the mechanisms underlying LLMs, but we just don’t; they’re shaped by gradient descent into processes that create predictions in a fashion almost entirely opaque to us. AIUI there are multiple theories of consciousness under which it could be a process instantiable that way (and, of course, the true theory could be one we haven’t thought of yet). If consciousness is a function of, say, self-modeling (I don’t think this one’s true, just using it as an example), it could plausibly be instantiated simply by training the model in contexts where it must self-model to predict well. If illusionism (which I also disbelieve) is true, perhaps the models already feel the illusion of consciousness whenever they access information internal to them. Et cetera.
As I’ve listed two theories I disbelieve and none I agree with, which strikes me as perhaps discourteous, here are some theories I find not-entirely-implausible. Please note that I’ve given each about five minutes of casual consideration and could easily have missed a glaring issue.
Attention schema theory, which I heard about just today
‘It could be about having an efference copy’
I heard about a guy who thought it came about from emotions, and therefore was localized in (IIRC) the amygdala (as opposed to the cortex, where it sounded like he thought most people were looking)
Ipsundrums (though I don’t think I buy the bit about it being only mammals and birds in the linked post)
Global workspace theory
[something to do with electrical flows in the brain]
Anything with biological nerves is conscious, if not of very much (not sure what this would imply about other substrates)
Uhh it doesn’t seem impossible that slime molds could be conscious, whatever we have in common with slime molds
Who knows? Maybe every individual cell can experience things. But, like, almost definitely not.
You might believe that the distinctions I make are idiosyncratic, though the meanings are in fact clearly distinct in ordinary usage, but I clearly do not agree with your misleading use of what people would be led to think are my words, and you should take care not to conflate things. You want people to precisely match your own qualifiers in cases where that causes no difference in the meaning of what is said (which makes enough sense), but you will directly object to people pointing out a clear miscommunication of yours because you do not care about a difference in meaning. And you are continually asking me to give in on language, regardless of how correct I may be, while claiming it is better to privilege neither usage. That is not a useful approach.
(I take no particular position on physicalism at all.) Since you are not a panpsychist, you likely believe that consciousness is not common to the vast majority of things. That means the basic prior for whether an item is conscious is ‘almost certainly not’, unless we have already updated it based on other information. Under what reference class or mechanism should we be more concerned about the consciousness of an LLM than about an ordinary computer running ordinary programs? There is nothing in its operating principles that seems particularly likely to lead to consciousness.
There are many people, including the original poster of course, trying to use behavioral evidence to get around that, so I pointed out how weak that evidence is.
An important distinction you seem not to see in my writing (whether because I wrote unclearly or you missed it doesn’t really matter) is that when I speak of knowing the mechanisms by which an LLM works, I mean something very fundamental. We know these two things: 1) exactly what mechanisms are used to do the operations involved in executing the program (physically on the computer and mathematically), and 2) the exact mechanisms through which we determine which operations to perform.
As you seem to know, LLMs are actually extremely simple programs of extremely large matrices, with values chosen by the very basic procedure of gradient descent. Nothing about gradient descent is especially interesting from a consciousness point of view. It’s basically a massive chain of very simplified ODE-solver-style steps, which are extremely well understood and clearly have no consciousness at all if anything mathematical doesn’t. It could also be viewed as just a very large number of variables in a massive but simple statistical regression. Notably, even if gradient descent were directly related to consciousness, we would still have no reason to believe that an LLM doing inference rather than training would be conscious. Simple matrix math doesn’t seem like much of a candidate for consciousness either.
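To make concrete how mechanically simple the training procedure is, here is a minimal, hypothetical sketch of gradient descent fitting a single weight. (This is an illustration of the update rule only, not a real model; an actual LLM applies the same kind of arithmetic to billions of weights, but nothing qualitatively different is added.)

```python
# Toy sketch: fit one weight w so that w * x approximates y = 2x.
# The "learning" is nothing but repeated subtraction of a derivative.

def train_weight(steps=200, lr=0.1):
    w = 0.0  # one parameter; an LLM has billions, updated the same way
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
    for _ in range(steps):
        for x, y in data:
            pred = w * x               # forward pass: just multiplication
            grad = 2 * (pred - y) * x  # derivative of the squared error
            w -= lr * grad             # the entire update rule
    return w

print(round(train_weight(), 3))  # prints 2.0
```

The point is that every step is well-understood arithmetic; scale is the only thing a real training run adds.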
Someone trying to make the case for consciousness would thus need to think it likely that one of the other mechanisms in LLMs is related to consciousness, but LLMs are actually missing a great many mechanisms that would enable things like self-reflection and awareness (including a number that were present in primitive earlier neural networks, such as recursion and internal loops). The people trying to make up for those omissions do a number of things to recreate them (‘attention’ being the built-in one, but also things like feeding previous outputs back in), but those very simple approaches don’t seem like likely candidates for consciousness (to me).
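The ‘use of previous outputs’ amounts to something like the following hypothetical sketch: a stateless function re-applied to a growing token list. `next_token` here is a toy stand-in for the forward pass, not a real model; the shape of the loop is the point.

```python
# Sketch of autoregressive inference: the only "memory" between steps
# is the token list itself being fed back in. There is no internal
# loop or persistent state carried across calls.

def next_token(context):
    # stand-in for a transformer forward pass: any pure function
    # from a token sequence to one more token has this shape
    return sum(context) % 10

def generate(prompt, n=5):
    tokens = list(prompt)
    for _ in range(n):
        tokens.append(next_token(tokens))  # output re-enters as input
    return tokens

print(generate([3, 1, 4], n=4))  # prints [3, 1, 4, 8, 6, 2, 4]
```

Whether re-reading one’s own prior output in this way could ever amount to self-reflection is exactly the point in dispute; the loop itself is trivially simple.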
Thus, it remains extremely unlikely that an LLM is conscious.
When you say we don’t know what mechanisms are used, you seem to be talking about not understanding something completely different from what I am saying we understand. We don’t understand exactly what each weight means (except in some rare cases that researchers have seemingly figured out), or why it was chosen rather than any number of other values that would work out similarly, but that is most likely unimportant to my point about mechanisms. There is, as far as I can tell, a genuine ambiguity in the meaning of ‘mechanism’: we can be talking about completely different levels at which mechanisms operate, and I am talking about the very lowest ones.
Note that I do not usually make claims about the mechanisms underlying consciousness in general, except that it is unlikely to be these extremely basic physical and mathematical ones. I genuinely do not believe that we know enough about consciousness to nail it down to even a small subset of theories. That said, there are still a large number of theories of consciousness that either don’t make internal sense, or seem like components of consciousness rather than the whole thing.
Pedantically, if consciousness is related to ‘self-modeling’, the modeling needs to be internal, for the basic reason that it is just ‘modeling’ otherwise. I can’t prove that external modeling isn’t enough for consciousness (how could I?), but I am unaware of anyone making that contention.
So, would your example be ‘self-modeling’? Your brief sentence isn’t enough for me to be sure what you mean. But if it is related to people’s recent claims about introspection on this board, then I don’t think so: that would be modeling the external actions of an item that happened to turn out to be itself. For example, if I were to read the life story of a person I didn’t realize was me, and make inferences about how the subject would act under various conditions, that isn’t really self-modeling. On the other hand, in the comments to that post, I actually proposed that you could train a model on its own internal states, and that could maybe have something to do with this (if the self-modeling theory is true). This is something we do not train current LLMs on at all, though.
As far as I can tell (as someone who finds the very idea of illusionism strange), illusionism is not a useful point of view in this dispute, because it would make the question of whether an LLM is conscious pretty moot. Effectively, the answer would be something like ‘why should I care?’, or ‘no’, or even ‘to the same extent as people’, depending on the mood of the speaker, regardless of how an LLM (or an ordinary computer program, almost all of which process information heavily) works. If consciousness is an illusion, we aren’t talking about anything real, and it is thus useful to ignore illusionism when talking about this question.
As I mentioned before, I do not have a particularly strong theory for what consciousness actually is or even necessarily a vague set of explanations that I believe in more or less strongly.
I can’t say I’ve heard of ‘attention schema theory’ before, nor some of the other things you mention, like ‘efference copy’ (the latter seems to be all about the body, which doesn’t seem all that promising as a theory of what consciousness is, though I can’t rule out its being part of it, since the idea is that it is used in self-modeling, which, as I mentioned earlier, I can’t actually rule out either).
My pet theory of emotions is that they are simply a shorthand for ‘you should react in ways appropriate to a situation that is...’ a certain way. For example (and these were not carefully chosen examples), anger would be ‘a fight’, happiness would be ‘very good’, sadness would be ‘very poor’, and so on. More complicated emotions might include things like a situation being good but also high stakes. The reason for using a shorthand would be that our conscious mind is very limited in what it can hold at once. Despite this being uncertain, I find it much more likely than emotions themselves being consciousness.
I would explain things like blindsight (from your ipsundrum link) through having a subconscious mind that gathers information and makes a shorthand before passing it to the rest of the mind (much like my theory of emotions). The shorthand without the actual sensory input could definitely lead to not seeing but being able to use the input to an extent nonetheless. Like you, I see no reason why this should be limited to the one pathway they found in certain creatures (in this case mammals and birds). I certainly can’t rule out that this is related directly to consciousness, but I think it more likely to be another input to consciousness rather than being consciousness.
Side note, I would avoid conflating consciousness and sentience (like the ipsundrum link seems to). Sensory inputs do not seem overly necessary to consciousness, since I can experience things consciously that do not seem related to the senses. I am thus skeptical of the idea that consciousness is built on them. (If I were really expounding my beliefs, I would probably go on a diatribe about the term ‘sentience’ but I’ll spare you that. As much as I dislike sentience based consciousness theories, I would admit them as being theories of consciousness in many cases.)
Again, I can’t rule out global workspace theory, but I am not sure how it is especially useful. What makes a global workspace conscious that doesn’t happen in an ordinary computer program I could theoretically write myself? A normal program might take a large number of inputs, process them separately, and then put it all together in a global workspace. It thus seems more like a theory of ‘where it occurs’ than ‘what it is’.
‘Something to do with electrical flows in the brain’ is obviously not very well specified, but it could possibly be meaningful if you mean the way a pattern of electrical flows causes future patterns of electrical flows as distinct from the physical structures the flows travel through.
Biological nerves being the basis of consciousness directly is obviously difficult to evaluate. It seems too simple, and I am not sure whether it is possible to have such tiny amounts of consciousness that they then add up to our level of it. (I am also unsure whether there is a spectrum of consciousness beyond the levels known within humans.)
I can’t say I believe a slime mold is conscious (but again, I can’t prove it is impossible). I would probably not believe any simple animals (like ants) are either, even if someone had a good explanation for why their theory says the ant would be. Ants and slime molds still seem more likely to be conscious to me than current LLM-style AI, though.