I’m not saying that LeCun’s rosy views on AI safety stem solely from his philosophy of mind, but yes, I suspect there is something there.
It seems to me that when he says things like “LLMs don’t display true understanding” or “true reasoning”, as if there were some secret sauce that he thinks can only appear in his JEPA architecture or whatever, this is very similar to the linguistic problems I’ve observed with consciousness.
Of course, if you discussed this with him, he would say things like “No, this is not just a linguistic debate, LLMs cannot reason at all, my cat reasons better”. But to me, that response itself indicates a linguistic debate.
It seems to me that LeCun is basically an essentialist about his JEPA architecture, treating it as the main criterion for a neural network to exhibit “reasoning”.
LeCun’s algorithm is something like: “JEPA + not LLM → reasoning”.
My algorithm is more like: “chain-of-thought + can solve complex problems + many other things → reasoning”.
This is very similar to the story I tell for consciousness in the Car Circuit section here.
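To make the contrast concrete, here is a minimal sketch in Python (the class and field names are hypothetical and purely illustrative, not a formal statement of either position) of the two criteria as predicates over a described system:

```python
from dataclasses import dataclass

@dataclass
class System:
    architecture: str            # e.g. "LLM" or "JEPA"
    uses_chain_of_thought: bool
    solves_complex_problems: bool

def reasons_essentialist(s: System) -> bool:
    # Essentialist criterion: "reasoning" requires the right kind of architecture.
    return s.architecture == "JEPA"

def reasons_cluster(s: System) -> bool:
    # Cluster criterion: "reasoning" names a bundle of observable abilities,
    # whatever architecture happens to produce them ("+ many other things").
    return s.uses_chain_of_thought and s.solves_complex_problems

llm = System(architecture="LLM", uses_chain_of_thought=True, solves_complex_problems=True)
print(reasons_essentialist(llm))  # False: wrong essence
print(reasons_cluster(llm))       # True: right abilities
```

The disagreement is not over any observable fact about the system; it is over which predicate the word “reasoning” should pick out.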
OK, maybe I understand. If I put it in my own words: You think “consciousness” is just a word denoting a somewhat arbitrary conjunction of cognitive abilities, rather than a distinctive actual thing which people are right or wrong about in varying degrees, and that the hard problem of consciousness results from reifying this conjunction. And you suspect that LeCun, in his own thinking, denies (for example) that LLMs can reason because he has added unnecessary extra conditions to his personal definition of “reasoning”.
Regarding LeCun: It strikes me that his best-known argument about the capabilities of LLMs rests on a mathematical claim: that in pure autoregression, the probability of error necessarily grows. He directly acknowledges that adding chain of thought can ameliorate the problem… In his JEPA paper, he discusses what reasoning is, just a little bit. In Kahneman’s language, he calls it a System 2 process, and characterizes it as “simulation plus optimization”.
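For context, a rough reconstruction of that claim (my sketch, assuming a constant, independent per-token error probability $\varepsilon$ and no error correction):

$$P(\text{an } n\text{-token continuation is error-free}) = (1-\varepsilon)^n \longrightarrow 0 \quad \text{as } n \to \infty.$$

Under those assumptions, the probability of at least one error approaches 1 as generation gets longer; chain of thought and other feedback mechanisms presumably work by attacking the independence and no-correction assumptions rather than the arithmetic.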
Regarding your path to eliminativism: I am reminded of my discussion with Carl Feynman last year. I assume you both have subjective experience that is made of qualia from top to bottom, but also have habits of thought that keep you from seeing this as ontologically problematic. In his case, the sense of a problem just doesn’t arise and he has to speculate as to why other people feel it; in your case, you felt the problem, until you decided that an AI civilization might spontaneously develop a spurious concept of phenomenal consciousness.
As for me, I see the problem and I don’t feel a need to un-see it. Physical theory doesn’t contain (e.g.) phenomenal color; reality does; therefore we need a broader theory. The truth is likely to sound strange, e.g. there’s a lattice of natural qubits in the cortex, the Cartesian theater is how the corresponding Hilbert space feels from the inside, and decohered (classical) computation is unconscious and functional only.