I feel like my state is significantly more complicated than that. I smoothly accumulate short-term memory and package some of it away into long-term memory, which even more slowly gets packaged away into longer-term memory. GPT-3's window size would run out the first time I tried to do a literature search and read a few papers, because it doesn't form memories so easily.
The way actual GPT-3 (or really anything with limited state but lots of training data, I think) gets around this sort of thing is by already having read those papers during training, plus lots of examples of people reacting to papers, and then using context to infer that it should output words that come from someone at a later stage of paper-reading.
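The window-size problem above can be made concrete with a toy sketch: a fixed attention window simply drops everything older than the window, so earlier papers vanish from the model's "memory". The numbers and token strings below are purely illustrative, not GPT-3's actual tokenization.

```python
# Toy illustration: a fixed context window discards old tokens, so a
# long literature read "scrolls" earlier papers out of view entirely.
WINDOW = 2048  # illustrative window size, in tokens

def visible_context(history, window=WINDOW):
    """Return only the most recent `window` tokens; older ones are lost."""
    return history[-window:]

# Simulate reading three papers of ~1500 "tokens" each.
history = []
for paper in range(3):
    history.extend([f"p{paper}_tok{i}" for i in range(1500)])

ctx = visible_context(history)
print(len(history), len(ctx))  # 4500 tokens read, only 2048 visible
print("p0_tok0" in ctx)        # the first paper is gone: False
```

The point of the sketch is that nothing in the model's state records the first paper at all once it leaves the window, which is why pretraining (having "already read" the papers) has to do that work instead.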
Do you foresee a different, more human-like model of humans becoming practical to train?
Misunderstanding: You are talking about literature research, which I do see as part of training. I am talking about original research, which at its best consists of prompts like "This one-liner construction from these four concepts can be elegantly modeled using the concept of ". The results would of course be integrated into long-term memory using fine-tuning.
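The loop described here (open-ended prompt → model completion → fold the result back in via fine-tuning) can be sketched as follows. This is a hypothetical illustration: `complete` and `fine_tune` are placeholder stubs, not any real model API, and the model state is a bare list standing in for weights.

```python
def complete(model, prompt):
    """Placeholder for a language-model completion call (hypothetical)."""
    return " <model's proposed concept>"

def fine_tune(model, example):
    """Placeholder: fold one new example into 'long-term memory'."""
    return model + [example]

def research_step(model, concepts):
    # The prompt deliberately ends mid-sentence so the model supplies
    # the missing concept, as in the comment above.
    prompt = ("This one-liner construction from these four concepts "
              f"({', '.join(concepts)}) can be elegantly modeled "
              "using the concept of")
    insight = complete(model, prompt)
    return fine_tune(model, prompt + insight)

model = []  # stand-in for model state before the research step
model = research_step(model, ["A", "B", "C", "D"])
print(len(model))  # the new result is now part of "long-term memory"
```

The design point is that short-term state lives only in the prompt, while anything worth keeping is moved into the weights by the fine-tuning step, mirroring the short-term/long-term memory split discussed earlier in the thread.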