Don’t expect AGI anytime soon
This is a brief follow-up to my previous post, The probability that Artificial General Intelligence will be developed by 2043 is Zero, which I think was a bit too long for many people to read. In this post I show some reactions from top people in AI to my argument as I made it briefly on Twitter.
First, Yann LeCun himself, when I reacted to the Browning and LeCun paper I discuss in my previous post:
As you can see, LeCun’s response was that the argument is “ridiculous”. The reason is that LeCun cannot win. He at least understands the argument, which is really a proof that his position is wrong, because either option he could take to defend it will fail. So instead of trying to defend it, he calls the argument “ridiculous”.
In another discussion, with Christopher Manning, an influential NLP researcher at Stanford, I debate the plausibility of DL systems as models of language. Unlike LeCun, he actually takes my argument seriously, but drops out when I show that his position is not winnable: the fact that “Language Models” learn Python proves that they are not models of language. (The link to the tweets is https://twitter.com/rogerkmoore/status/1530809220744073216?s=20&t=iT9-8JuylpTGgjPiOoyv2A)
The fact is, Python changes everything, because we know it works as a classical symbolic system. We don’t know how natural language or human cognition works; many of us suspect they have components that are classical symbolic processes, and neural network proponents deny this. But they cannot deny that Python is a classical symbolic language. So they must somehow deal with the fact that their models can mimic these symbolic processes, and they have no way to prove that the same models are not mimicking human symbolic processes in just the same way. My claim is that in both cases the mimicking will take you a long way, but not all the way. DL can learn the mappings where the symbolic system produces lots of examples, as with language and Python. Where the symbol system is used for planning, creativity, etc., DL struggles to learn. I think in ten years everyone will realize this and AI will look pretty silly (again).
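To make concrete what “classical symbolic system” means here, consider a toy sketch (purely illustrative; the evaluate helper and the choice of arithmetic are just an example, not part of any of the exchanges below). A handful of explicit rules over symbols handles any nesting depth, with no examples involved, which is exactly the kind of competence a statistical mapping over seen strings does not give you.

```python
# Toy illustration of a classical symbolic system: expressions are evaluated by
# explicit recursive rules over symbols, so the same rules work at any depth.
import ast

def evaluate(node):
    """Recursively evaluate an arithmetic expression node by structural rules."""
    if isinstance(node, ast.Expression):
        return evaluate(node.body)
    if isinstance(node, ast.Constant):       # a literal number is its own value
        return node.value
    if isinstance(node, ast.BinOp):          # apply the rule for the operator
        left, right = evaluate(node.left), evaluate(node.right)
        if isinstance(node.op, ast.Add):
            return left + right
        if isinstance(node.op, ast.Mult):
            return left * right
    raise ValueError("unsupported construct")

# Arbitrarily deep nesting is handled by the same rules, not by remembered examples.
print(evaluate(ast.parse("(1 + 2) * (3 + (4 * 5))", mode="eval")))  # 69
```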
In the meantime, we will continue to make progress in many technological areas. Automation will continue to improve. We will have programs that can generate video sequences and produce amazing video productions. Noam Chomsky likens these technological artefacts to bulldozers: if you want to build bulldozers, fine. Nothing wrong with that. We will have amazing bulldozers, but not “intelligent” ones.
Python syntax is closer to natural language, which plays into what the LLMs do best. I don’t think the “symbolic” aspect plays into this in any way, and that kind of misses the argument on symbolic reasoning (that the LLMs are still just doing correlation, and have no “grounding” of what the symbols mean, nor does any processing happen at that grounding level).
I’m still confused by your position: you say DL is capturing symbolic values (in this one case), but also that DL is going to fail (because...?).
So what I am saying is that Python is symbolic, which no one doubts, and that language is also symbolic, which neural network people doubt. That is how the symbolic argument becomes important: because whatever LLMs do with Python, I suggest they do the same thing with natural language. And whatever they are doing with Python is the wrong thing, so I am suggesting that what they do with language is also “the wrong thing”.
So what I am saying is that DL is not doing symbolic reasoning with Python or natural language, and will fail in cases where Python or NL require symbolic reasoning.
I think your argument is wrong, but interestingly so. I think DL is probably doing symbolic reasoning of a sort, and it sounds like you think it is not (because it makes errors?)
Do you think humans do symbolic reasoning? If so, why do humans make errors? Why do you think a DL system won’t be able to eventually correct its errors in the same way humans do?
My hypothesis is that DL systems are doing a sort of fuzzy, finite-depth symbolic reasoning: they have the capacity to understand the productions at a surface level and can apply them (subject to contextual clues, in an error-prone way) step by step, but once you ask for sufficient depth they will get confused and fail. Unlike humans, feedforward neural nets can’t yet think for longer and churn step by step; but if someone were to figure out a way to build a looping option into the architecture, then I won’t be surprised to see DL systems that can go a lot further on symbolic reasoning than they currently do.
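As a toy sketch of that finite-depth versus looping contrast (purely illustrative; the rewrite rule and helper names are made up for this example), compare a reducer that is only allowed a fixed number of rewrite steps, standing in for a fixed stack of feedforward layers, with one that loops until nothing changes:

```python
# Illustrative contrast: a fixed number of rewrite steps (like a fixed-depth
# feedforward pass) vs. looping to a fixed point (an architecture that can
# "think longer").
import re

def reduce_once(expr: str) -> str:
    """Rewrite one innermost parenthesised addition, e.g. '(1+2)' -> '3'."""
    return re.sub(r"\((\d+)\+(\d+)\)",
                  lambda m: str(int(m.group(1)) + int(m.group(2))),
                  expr, count=1)

def fixed_depth_eval(expr: str, depth: int) -> str:
    """Only `depth` rewrite steps are available, however deep the expression is."""
    for _ in range(depth):
        expr = reduce_once(expr)
    return expr

def looping_eval(expr: str) -> str:
    """Keep rewriting until nothing changes."""
    prev = None
    while expr != prev:
        prev, expr = expr, reduce_once(expr)
    return expr

nested = "((((1+1)+1)+1)+1)"        # needs 4 rewrite steps
print(fixed_depth_eval(nested, 2))  # runs out of depth: '((3+1)+1)'
print(looping_eval(nested))         # '5'
```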
I think humans do symbolic as well as non-symbolic reasoning. This is what is often called “hybrid”. I don’t think DL is doing symbolic reasoning, but LeCun is advocating some sort of alternative symbolic system, as you suggest. Errors are a bit of a side issue, because both symbolic and non-symbolic systems are error-prone.
The paradox that I point out is that Python is symbolic, yet DL can mimic its syntax to a very high degree. This shows that DL cannot be informative about the nature of the phenomenon it is mimicking. You could argue that Python is not symbolic; this would obviously be wrong. But people DO use the same argument to show that natural language and cognition are not symbolic. I am saying this could be wrong too. So DL is not uncovering some deep properties of cognition; it is merely doing some clever statistical mappings.
BUT it can only learn the mappings where the symbolic system produces lots of examples, like language. When the symbol system is used for planning, creativity, etc., DL struggles to learn.
My read was “we’ve already got models as strong as they’re going to get, and they’re not AGI”. I disagree that they’re as strong as they’re going to get.
No, I didn’t say they are as strong as they are going to get. But they are strong enough to do some Python, which shows that neural networks can make a symbolic language look as though it wasn’t one. In other words, they have no value in revealing anything about the underlying nature of Python, or of language (my claim).