[Question] Partial-Consciousness as semantic/symbolic representational language model trained on NN
If you hook up a language model like GPT-3 to a chess engine or some other NN model, isn’t that establishing a tie between semantic/symbolic-level representations (coherent, understandable words and sentences) and the distributed, subsymbolic representations inside NNs? How likely is it that this is how the human brain works? Isn’t this also progress towards causal model-building, because we would now have an easily manipulable model (a causal model with concepts/symbolic relations)? I don’t see how someone could argue that a system like this isn’t truly “understanding” the way humans “understand” (what would Searle say?).
Related thought: I have spoken briefly at EA conferences with AI people who expressed skepticism about symbolic models and NN+symbolic hybrid models. I’m curious about the reasons why; if anyone is short on time, pointers to resources and papers would also be really appreciated. Thank you.
How? Since the inputs and outputs live in completely different spaces, I don’t see how you can hook them up.
So, I thought it would be a neat proof of concept if GPT-3 served as a bridge between something like a chess engine’s actions and verbal/semantic-level explanations of its goals, so that the actions are interpretable by humans. For example: bishop to g5; this develops a piece and pins the knight to the king, so you can put additional pressure on the pawn on d5 (or something like this).
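Roughly, the kind of wiring I’m imagining looks like the minimal sketch below (assuming Stockfish via the python-chess package and the GPT-3 completions API, with an API key in the environment; the model name, prompt, and helper are just illustrative placeholders, not a real implementation):

```python
# Minimal sketch: a chess engine picks a move, GPT-3 verbalizes it.
# Assumes a local Stockfish binary, the python-chess package, and the
# legacy GPT-3 completions API; all names here are illustrative.
import chess
import chess.engine
import openai

def explain_move(board: chess.Board, engine_path: str = "stockfish") -> str:
    # Subsymbolic side: the engine searches the position and returns a move.
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    result = engine.play(board, chess.engine.Limit(time=0.5))
    engine.quit()
    move = board.san(result.move)

    # Symbolic side: GPT-3 is prompted to explain the move in plain language.
    prompt = (
        f"Position (FEN): {board.fen()}\n"
        f"Engine move: {move}\n"
        "Explain in one or two sentences what this move accomplishes:"
    )
    response = openai.Completion.create(
        model="text-davinci-002",  # placeholder GPT-3 model name
        prompt=prompt,
        max_tokens=80,
    )
    return response.choices[0].text.strip()

# e.g. explain the engine's choice from the starting position
print(explain_move(chess.Board()))
```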
In response, Reiichiro Nakano shared this paper: https://arxiv.org/pdf/1901.03729.pdf, which more or less shows it’s possible to generate natural-language representations of an agent’s states/actions for Frogger. There are probably glaring/obvious flaws with my OP, but this is what inspired those thoughts.
Apologies if this is really ridiculous; I may be suggesting ML-related ideas prematurely and having fanciful thoughts. I’ll be studying ML diligently to help with that.
(I’ve only read the abstract of the linked paper.)
If you did something like this with GPT-3, you’d essentially have GPT-3 try to rationalize the actions of the chess engine the way a human would. This feels more like having two separate agents with a particular mode of interaction, rather than a single agent with a connection between symbolic and subsymbolic representations.
(One intuition pump: notice that there isn’t any point where a gradient affects both the GPT-3 weights and the chess engine weights.)
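To make that concrete, here is a toy PyTorch sketch with tiny stand-in modules for the “engine” and the “explainer” (not the real systems): the only thing crossing the boundary between them is a discrete move choice, so a loss on the explanation side never produces a gradient for the engine’s weights.

```python
# Toy PyTorch sketch of the "no shared gradient" point. The modules are
# stand-ins, not a real chess engine or GPT-3; the point is only that the
# discrete move passed between them blocks any end-to-end gradient.
import torch
import torch.nn as nn

engine = nn.Linear(8, 4)     # stand-in for the chess engine's network
explainer = nn.Linear(4, 2)  # stand-in for the language model

position = torch.randn(1, 8)
move_scores = engine(position)
move_index = move_scores.argmax(dim=-1)                     # discrete choice: the graph breaks here
move_onehot = nn.functional.one_hot(move_index, 4).float()  # only this crosses the boundary

loss = explainer(move_onehot).sum()
loss.backward()

print(explainer.weight.grad is not None)  # True: the explainer's weights get a gradient
print(engine.weight.grad is None)         # True: the engine's weights never do
```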