[Question] Partial-consciousness as a semantic/symbolic representational language model trained on an NN
If you hook up a language model like GPT-3 to a chess engine or some other NN model, aren’t you establishing a tie between semantic/symbolic-level representations (coherent, understandable words and sentences) and the distributed, subsymbolic representations inside NNs? How likely is it that this is how the human brain works? Isn’t this also progress toward causal model-building, because we now have an easily manipulable model (a causal model with concepts and symbolic relations)? I don’t see how someone could deny that a system like this is truly “understanding” the way humans “understand” (what would Searle say?).
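To make the kind of coupling I have in mind concrete, here is a minimal sketch in Python using the python-chess library for the symbolic side. The `query_language_model` function and the prompt/answer format are my own hypothetical placeholders, not any particular API; the point is only to show where the symbolic layer (legal moves, board state) meets the subsymbolic one (free-form LM text), and where the engine’s rules ground the LM’s output.

```python
# Minimal sketch of tying an LM (subsymbolic) to a chess engine's symbolic state.
# Assumptions: `query_language_model` is a hypothetical stand-in for any LM API,
# and the MOVE/WHY answer format is invented here for illustration.
import chess


def query_language_model(prompt: str) -> str:
    """Hypothetical LM call; swap in a real API client here."""
    raise NotImplementedError("plug in an actual language model")


def describe_position(board: chess.Board) -> str:
    """Render the engine's symbolic state as text the LM can consume."""
    legal = ", ".join(board.san(m) for m in board.legal_moves)
    return (
        f"Position (FEN): {board.fen()}\n"
        f"Legal moves: {legal}\n"
        "Pick one legal move and explain it in one sentence.\n"
        "Answer as: MOVE: <san> | WHY: <reason>"
    )


def lm_move(board: chess.Board) -> tuple[chess.Move, str]:
    """Ask the LM for a move, then validate it against the symbolic rules."""
    reply = query_language_model(describe_position(board))
    move_part, _, why = reply.partition("| WHY:")
    san = move_part.replace("MOVE:", "").strip()
    # parse_san raises if the LM's text is not a legal move in this position,
    # so the symbolic system vetoes any subsymbolic output it can't ground.
    move = board.parse_san(san)
    return move, why.strip()
```

The validation step in `lm_move` is where the “tie” gets enforced: the LM can say anything, but only output that parses into the engine’s discrete, manipulable symbols ever affects the game, which is roughly the grounding relationship the question is asking about.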
Related thought: at EA conferences I have spoken briefly with AI researchers who expressed skepticism about symbolic models and NN+symbolic hybrid models. I’m curious about the reasons why; if anyone is short on time, pointers to resources and papers would also be really appreciated. Thank you.