I think it’s a useful & on-first-read coherent definition; my main suggestion is to not accompany it with ‘AGI’; we already have too many competing definitions of AGI for another one to be useful. Holden Karnofsky’s choice to move away from that with ‘Transformative AI’ made it much easier to talk about clearly.
“Humans have at least four types”
What are the four types? I can imagine breaking that up along different axes.
Here is some rough draft of an expanded version that would try to justify the claims more rather than just stating the definition and the claims.
Here are two starts at detailing those types of memory:
Learning (alternate draft)
Meaning continuous learning from ongoing experience
Humans have at least four types:
Episodic memory for specific snapshots of cognition
Useful for pinpointing important information, beliefs, and strategies for rapid single-shot learning
Currently implemented as vector-based memories for LLM agents/cognitive architectures; better implementations are relatively straightforward and should be expected soon (a minimal sketch of the vector-based version follows this list)
Semantic learning/habit learning
Equivalent to continuously training an LLM’s weights with examples from its deployment (sketched after this list)
Reinforcement learning
Learning not just what happened, but what led to reward, or what situations predict reward
Ongoing RLHF from user feedback is one of many possible straightforward implementations (see the logging sketch after this list).
Working memory.
Temporary storage
Implemented by short-term persistent activation and short-term weight changes in cortex
Roughly, the context window in LLMs
Curiously, the limitation in both seems to be analogous: it’s not how much you can fit in there, which is a lot, but how much you can usefully get out.
Seven is NOT the correct average estimate of working memory capacity, except for things like phone numbers you can repeat vocally or subvocally.
Four is a more common general estimate, but it’s clear that the measured capacity depends on retrieval difficulty from proactive interference.
Most lab tasks use highly overlapping repeated stimuli, e.g., the same colors or numbers repeated in different patterns.
Measures of working memory in text comprehension can be interpreted as showing nearly unlimited capacity, probably because the semantics barely overlap among different items.
I know less about benchmarks for retrieval from context windows, but have heard that there are similar limitations on retrieving more than a handful of items appropriately.
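As promised above, here are rough sketches of the first three types, to make the claims concrete. First, episodic memory: the vector-based version stores an embedding of each “snapshot” and retrieves by similarity. The class and function names are my illustrative choices, and embed() stands in for any sentence-embedding model; this is a sketch, not any framework’s actual API.

```python
# Minimal sketch of vector-based episodic memory. embed() is a stand-in
# for any sentence-embedding model (e.g., one from sentence-transformers).
import numpy as np

class EpisodicStore:
    def __init__(self, embed):
        self.embed = embed    # callable: text -> 1-D numpy array
        self.vectors = []     # one embedding per stored episode
        self.episodes = []    # the raw text snapshots

    def store(self, snapshot: str):
        """Save a 'snapshot' of a moment of cognition as text."""
        self.vectors.append(self.embed(snapshot))
        self.episodes.append(snapshot)

    def recall(self, cue: str, k: int = 3):
        """Return the k stored episodes most similar to the cue."""
        q = self.embed(cue)
        sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
                for v in self.vectors]
        best = np.argsort(sims)[::-1][:k]
        return [self.episodes[i] for i in best]
```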
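Second, semantic/habit learning: “running training continuously” can be as naive as taking one gradient step per deployment example. A hedged sketch using Hugging Face transformers; the model name and learning rate are placeholders, not recommendations.

```python
# Naive "continuous training": one gradient step per deployment example.
# Model name and learning rate are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; any causal LM works the same way
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

def learn_from_interaction(text: str):
    """Nudge the weights toward one example seen during deployment."""
    batch = tok(text, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss  # standard LM loss
    loss.backward()
    opt.step()
    opt.zero_grad()

learn_from_interaction("User correction: the capital of Australia is Canberra.")
```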
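Third, the reinforcement-learning side: the straightforward part is mostly plumbing. Log (prompt, response, reward) triples during deployment and feed them to a later preference-tuning pass (a reward model, PPO, DPO-style pairs, etc.). The file path and reward convention below are assumptions.

```python
# Log (prompt, response, reward) triples for a later preference-tuning pass.
# The file path and the +1/-1 reward convention are assumptions.
import json
import time

LOG_PATH = "feedback_log.jsonl"  # hypothetical location

def record_feedback(prompt: str, response: str, reward: float):
    """reward: e.g., +1.0 for a thumbs-up, -1.0 for a thumbs-down."""
    entry = {"ts": time.time(), "prompt": prompt,
             "response": response, "reward": reward}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")
```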
Learning (second alternate draft)
Here I mean continuous learning from experiences during deployment. Current LLMs barely do this; they accumulate context, but this isn’t quite learning, just using more extensive inputs to make better outputs. GPT-4o now makes guesses at important information to put into “memory” (probably just added to the context window/prompt), which appears to be fairly useful.
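That mechanism is simple enough to sketch: ask the model itself whether a turn contains something worth remembering, then prepend stored items to later prompts. Here complete() is a stand-in for any chat-completion call, and the prompt format is my invention, not OpenAI’s actual implementation.

```python
# Sketch of the "guess what's worth remembering" mechanism. complete() is a
# stand-in for any chat-completion API call; the prompt format is invented.
memory: list[str] = []

def maybe_memorize(turn: str, complete):
    """Ask the model itself whether this turn contains a durable fact."""
    verdict = complete(
        "Does this message contain a fact worth remembering about the user? "
        "Answer 'MEMORIZE: <fact>' or 'NO'.\n\n" + turn
    )
    if verdict.startswith("MEMORIZE:"):
        memory.append(verdict.removeprefix("MEMORIZE:").strip())

def build_prompt(user_msg: str) -> str:
    """Prepend remembered facts to the prompt, GPT-4o-memory style."""
    header = "\n".join(f"[memory] {fact}" for fact in memory)
    return f"{header}\n\n{user_msg}" if memory else user_msg
```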
Humans have multiple types of continuous learning. Each could be implemented in conjunction with LLM agents/cognitive architectures, so I expect them to be implemented soonish.
Episodic memory for specific events.
This is a “snapshot” of a moment of cognition
Current vector-based memory in LLM agents does this, but very poorly compared to the human implementation. A more human-like system is relatively straightforward, but explaining why I think that would require explaining how to implement it, which I’m not going to do. Plenty of people understand this, but they tend to not be the people implementing AGI agents.
Updating semantic knowledge continuously
Even if I’m a dense amnesic with no functioning episodic memory (because my medial temporal lobe has been severely damaged), I’ll still be able to use new knowledge if it’s repeated a few times. This is something like adding every user correction to the training set for the next training run, or running training continuously (sketched below).
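A minimal sketch of the “training set for the next run” option; the path and example format are my assumptions, not any deployed system’s pipeline.

```python
# Accumulate user corrections into a dataset for the next training run.
# Repetition does the same work it does for the amnesic patient: facts
# that recur appear more often in the data and get learned more strongly.
import json

DATASET_PATH = "corrections.jsonl"  # hypothetical path

def queue_correction(wrong: str, corrected: str):
    """Append one correction as a (prompt, completion) training example."""
    example = {"prompt": wrong, "completion": corrected}
    with open(DATASET_PATH, "a") as f:
        f.write(json.dumps(example) + "\n")

queue_correction("The Eiffel Tower is in Berlin.",
                 "No, the Eiffel Tower is in Paris.")
```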