I think your argument is wrong, but interestingly so. I think DL is probably doing symbolic reasoning of a sort, and it sounds like you think it is not (because it makes errors?)
Do you think humans do symbolic reasoning? If so, why do humans make errors? Why do you think a DL system won’t be able to eventually correct its errors in the same way humans do?
My hypothesis is that DL systems are doing a sort of fuzzy, finite-depth symbolic reasoning: they understand the productions at a surface level and can apply them step by step (subject to contextual cues, in an error-prone way), but once you ask for sufficient depth they get confused and fail. Unlike humans, feedforward neural nets can't yet think for longer and churn through the steps; but if someone figures out how to build a looping option into the architecture, I wouldn't be surprised to see DL systems go a lot further on symbolic reasoning than they currently do.
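To make the finite-depth versus looping point concrete, here is a toy sketch with a single made-up production rule; it is only an illustration of the general idea, not anyone's actual architecture:

```python
# Toy illustration: apply a symbolic production "ff" -> "f" repeatedly.
# A fixed number of passes is analogous to a fixed-depth feedforward net;
# a loop that runs until no rule applies can "think for longer".

RULE_LHS, RULE_RHS = "ff", "f"  # hypothetical production rule

def rewrite_fixed_depth(s: str, depth: int) -> str:
    """Apply the rule at most `depth` times (fixed compute budget)."""
    for _ in range(depth):
        if RULE_LHS not in s:
            break  # already in normal form
        s = s.replace(RULE_LHS, RULE_RHS, 1)
    return s

def rewrite_until_done(s: str) -> str:
    """Loop until no rule applies (unbounded number of steps)."""
    while RULE_LHS in s:
        s = s.replace(RULE_LHS, RULE_RHS, 1)
    return s

print(rewrite_fixed_depth("f" * 16, depth=3))  # budget too small: still 13 f's
print(rewrite_until_done("f" * 16))            # fully reduced to "f"
```

The fixed-depth version handles any input that needs at most three steps and fails beyond that, which is roughly the "fuzzy finite-depth" behaviour I mean; the looping version has no such ceiling.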
I think humans do symbolic as well as non-symbolic reasoning; this is what is often called "hybrid". I don't think DL is doing symbolic reasoning, but LeCun is advocating some sort of alternative symbolic system along the lines you suggest. Errors are a bit of a side issue, because both symbolic and non-symbolic systems are error-prone.
The paradox I point out is that Python is symbolic, yet DL can mimic its syntax to a very high degree. This shows that DL cannot be informative about the nature of the phenomenon it is mimicking. You could argue that Python is not symbolic, but that would obviously be wrong. Yet people DO use the same argument to show that natural language and cognition are not symbolic. I am saying this could be wrong too. So DL is not uncovering some deep properties of cognition; it is merely doing some clever statistical mappings.
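To be concrete about what I mean by Python being symbolic: the language is defined by a formal grammar, and the interpreter parses source text into a discrete tree of symbols before anything runs. A quick illustration using the standard ast module (the indent argument needs Python 3.9+):

```python
# Python's own standard library exposes the symbolic structure of the language:
# source text is parsed into an abstract syntax tree of discrete node types.
import ast

tree = ast.parse("total = price * quantity + tax")
print(ast.dump(tree, indent=2))  # indent= requires Python 3.9+
# The output is a tree of symbols (Module, Assign, BinOp, Name, ...),
# not a statistical pattern over characters.
```

A sequence model that predicts plausible-looking Python never has to build or consult this tree; it just learns the statistical mappings over the surface text.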
BUT it can only learn the mappings where the symbolic system produces lots of examples, as language does. Where the symbol system is used for planning, creativity, etc., DL struggles to learn.