GPT-X has a context (of some maximum size), and produces output based on that context. It’s very flexible and can do many different tasks based on what’s in the context.
To what extent is it reasonable to think of (part of) human cognition as being analogous, with working memory playing the role of context?
If I read the first half of a sentence of text, I can make some predictions about the rest, and in fact that prediction happens automatically when I’m prompted with the partial text (but can only happen when I keep the prompt in mind).
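As a toy sketch of that "partial prompt in, likely continuation out" pattern, here is a tiny bigram predictor with a bounded context window. (Everything here is invented for illustration; a bigram counter is nothing like GPT-X's internals, but it does exhibit the same interface: load a prompt into a fixed-size context, and a continuation falls out automatically.)

```python
from collections import Counter, defaultdict

class ToyPredictor:
    """Predicts the next word from the last word of a bounded context."""

    def __init__(self, max_context=8):
        self.max_context = max_context          # analogue of the context limit
        self.bigrams = defaultdict(Counter)     # word -> counts of following words

    def train(self, text):
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def complete(self, prompt, n=3):
        # Keep only the most recent max_context words, like a context window.
        context = prompt.split()[-self.max_context:]
        out = []
        for _ in range(n):
            candidates = self.bigrams.get(context[-1])
            if not candidates:
                break
            word = candidates.most_common(1)[0][0]
            out.append(word)
            context = (context + [word])[-self.max_context:]
        return out

model = ToyPredictor()
model.train("the cat sat on the mat and the cat ran")
print(model.complete("I saw the"))
```

The point of the sketch is only the shape of the loop: nothing happens until a prompt is loaded into the context, and then the continuation is produced mechanically from whatever is there, which is the aspect the working-memory analogy turns on.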
If I want to solve some math problem, I first load it into working memory (using my verbal loop and/or mental imagery), and then various avenues forward automatically present themselves.
Certainly, with human cognition there’s some attentional control, and there’s a feedback loop where you modify what’s in working memory as you’re going. So it’s not as simple as just one-prompt, one-response, and then onto the next (unrelated) task. But it does seem to me like one part of human cognition (especially when you’re trying to complete a task) is loading up semi-arbitrary data into working memory and then seeing what more opaque parts of your brain naturally spit out in response.
Does this seem like a reasonable analogy? In what ways is what our brains do with working memory the same as or different from what GPT-X does with context?
In tweet form: https://twitter.com/ESRogs/status/1283138948462555136