If you hook up a language model like GPT-3 to a chess engine or some other NN model, wouldn't that establish a tie between semantic/symbolic-level representations (coherent, understandable words and sentences) and the distributed, subsymbolic representations inside NNs?
How? Since the inputs and outputs live in completely different spaces, I don’t see how you can hook them up.
So, I thought it would be a neat proof of concept if GPT-3 served as a bridge between something like a chess engine’s actions and verbal/semantic-level explanations of its goals, so that the actions are interpretable by humans. E.g. bishop to g5: this develops a piece and pins the knight to the king, letting you add pressure to the pawn on d5 (or something like this).
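(For concreteness, here is a minimal sketch of what that hookup might look like. Everything specific here is an assumption rather than anything from the thread: the use of python-chess with a local Stockfish binary, the prompt format, and the legacy OpenAI completions API with the "davinci" model name. The point is just that the only interface between the two systems is plain text.)

```python
# Minimal sketch: a chess engine picks a move, then GPT-3 is prompted to
# "rationalize" it in natural language. Assumes python-chess, a Stockfish
# binary on the PATH, and the (legacy) OpenAI completions API.
import chess
import chess.engine
import openai  # assumes OPENAI_API_KEY is set in the environment

board = chess.Board()
engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # path is an assumption

# Ask the engine for its preferred move in the current position.
result = engine.play(board, chess.engine.Limit(time=0.5))
move_san = board.san(result.move)

# GPT-3 only sees text, so the board state has to be serialized
# (here as FEN, with the move given in standard algebraic notation).
prompt = (
    f"Position (FEN): {board.fen()}\n"
    f"The engine plays: {move_san}\n"
    "Explain the strategic idea behind this move in one or two sentences:"
)
completion = openai.Completion.create(
    engine="davinci",  # GPT-3 base model; the name is an assumption
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
)
print(move_san, "-", completion.choices[0].text.strip())
engine.quit()
```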
In response, Reiichiro Nakano shared this paper: https://arxiv.org/pdf/1901.03729.pdf which more or less shows it’s possible to produce natural-language representations of an agent’s states/actions for Frogger. There are probably glaring/obvious flaws in my OP, but this is what inspired those thoughts.
Apologies if this is really ridiculous—I’m maybe suggesting ML-related ideas prematurely & having fanciful thoughts. Will be studying ML diligently to help with that.
(I’ve only read the abstract of the linked paper.)
If you did something like this with GPT-3, you’d essentially have GPT-3 try to rationalize the actions of the chess engine the way a human would. This feels more like having two separate agents with a particular mode of interaction, rather than a single agent with a connection between symbolic and subsymbolic representations.
(One intuition pump: notice that there isn’t any point where a gradient affects both the GPT-3 weights and the chess engine weights.)
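(To make that intuition pump concrete, here is a toy PyTorch sketch. The two linear layers are hypothetical stand-ins for the chess engine and GPT-3, not real models. It shows that a discrete hand-off between the two networks cuts the computation graph, so no gradient from the explainer’s loss can ever touch the policy’s weights.)

```python
# Toy illustration: when the interface between the two models is discrete
# (text, a sampled move, an argmax), the computation graph is cut at the
# hand-off, so the "explainer" loss cannot update the "policy" weights.
import torch
import torch.nn as nn

policy = nn.Linear(64, 128)      # stand-in for the chess engine
explainer = nn.Linear(128, 10)   # stand-in for the language model

board_features = torch.randn(1, 64)
action_logits = policy(board_features)

# The hand-off: pick a discrete action and re-encode it.
# argmax is non-differentiable, so gradients stop here.
action = action_logits.argmax(dim=-1)
action_encoding = torch.nn.functional.one_hot(action, 128).float()

explanation_loss = explainer(action_encoding).sum()
explanation_loss.backward()

print(policy.weight.grad)     # None: no gradient reaches the "engine"
print(explainer.weight.grad)  # populated: only the explainer learns
```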