In retrospect, I completely failed to make clear that I wasn’t necessarily talking about something with a complicated internal state, such that it can behave like one human over long time scales. I was thinking more about the “minimum human-imitating unit” necessary to get things like IDA off the ground.
In fact this post was originally titled “What to do with a GAN of a human?”
I don’t think you need a complicated internal state to do research. You just need to have read enough research and math to have a good intuition for what definitions, theorems and lemmas will be useful. When I try to come up with insights, my short-term memory context would easily fit into GPT-3’s window.
I feel like my state is significantly more complicated than that. I smoothly accumulate short-term memory and package some of it away into long-term memory, which even more slowly gets packaged away into longer-term memory. GPT-3’s window would run out the first time I tried to do a literature search and read a few papers, because it doesn’t form memories the way I do.
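For scale on the window-size point, here is a rough back-of-the-envelope calculation; the figures (words per paper, tokens per word) are my own rule-of-thumb assumptions, not numbers from the thread:

```python
# Rough check of how much of a paper fits in GPT-3's 2048-token context window.
# ~8,000 words per paper and ~1.3 tokens per word are assumed rules of thumb.
GPT3_CONTEXT_TOKENS = 2048
WORDS_PER_PAPER = 8_000
TOKENS_PER_WORD = 1.3

tokens_per_paper = int(WORDS_PER_PAPER * TOKENS_PER_WORD)   # ~10,400 tokens
fraction_that_fits = GPT3_CONTEXT_TOKENS / tokens_per_paper  # ~0.2 of one paper

print(f"One paper is roughly {tokens_per_paper} tokens; "
      f"the 2048-token window holds about {fraction_that_fits:.0%} of it.")
```

Under these assumptions, even a single paper overflows the window several times over, before any accumulated notes from earlier reading.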
The way the actual GPT-3 (or, I think, anything with limited state but lots of training data) gets around this is that it has already read those papers during training, along with lots of examples of people reacting to papers, and then uses context to infer that it should output the words of someone at a later stage of paper-reading.
Do you foresee a different, more human-like model of humans becoming practical to train?
Misunderstanding: You are talking about literature research, which I do see as part of training. I am talking about original research, which at its best consists of prompts like “This one-liner construction from these four concepts can be elegantly modeled using the concept of ”. The results would of course be integrated into long-term memory using fine-tuning.
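Concretely, here is a minimal sketch of the prompt-then-fine-tune loop described above, using GPT-2 via Hugging Face transformers as a stand-in for a GPT-3-scale model; the model choice, sampling settings, and single gradient step are illustrative assumptions, not anything proposed in the thread:

```python
# Minimal sketch: prompt the model to complete a research thought, then
# "integrate the result into long-term memory" by fine-tuning on it.
# GPT-2 stands in for GPT-3 here; all hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 1. Prompt with a partially written research thought and let the model
#    fill in the missing concept.
prompt = ("This one-liner construction from these four concepts "
          "can be elegantly modeled using the concept of")
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
completion = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# 2. Fine-tune on the accepted completion (one gradient step shown;
#    a real run would batch many vetted results and train for longer).
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
batch = tokenizer(completion, return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice one would filter the completions and fine-tune on a batch of vetted results rather than a single sample, but the shape of the loop is the same.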