I’m not sure what the takeaway is here, but these calculations are highly suspect. What a memory athlete can memorize (in their domain of expertise) in 5 minutes is an intricate mix of working memory, long-term semantic memory, and episodic (hippocampal) memory.
I’m kind of fine with an operationalized version of “working memory” as opposed to a neuroanatomical concept. For practical purposes, it seems more useful to define “working memory” in terms of performance.
(That being said, the model which comes from using such a simplified concept is bad, which I agree is concerning.)
As for the takeaway, for me the one-minute number is interesting because it’s both kind of a lot and not all that much. When I’m puttering around my house balancing tasks such as making coffee, writing on LessWrong, etc., I have roughly one objective or idea in conscious view at a time, but the number of tasks and ideas swirling around “in circulation” (being recalled every so often) seems like it can be pretty large. The original idea for this post came from thinking about how psychology tasks like Miller’s seem more liable to underestimate this quantity than to overestimate it.
On the other hand, it doesn’t seem so large. Multi-tasking significantly degrades task performance, suggesting a real bottleneck.
The “about one minute” estimate fits my intuition: if that’s the amount of practically-applicable information actively swirling around in the brain at a given time (in some operationalized sense), it’s interesting that it’s small enough to be easily explained to another person (assuming they share the requisite background knowledge, so there’s no inferential gap when you explain what you’re thinking in your own terms). Yet it’s also ‘quite a bit’.
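For a concrete sense of scale, here’s a minimal back-of-envelope sketch of what “about one minute” of explanation amounts to, assuming a typical conversational speaking rate of roughly 150 words per minute; the specific numbers are illustrative assumptions on my part, not figures from the post.

```python
# Back-of-envelope: how much fits in "about one minute" of explanation?
# Assumptions (illustrative, not from the post): ~150 spoken words per minute,
# ~15 words per ordinary explanatory sentence.

WORDS_PER_MINUTE = 150
WORDS_PER_SENTENCE = 15

minutes = 1
words = WORDS_PER_MINUTE * minutes
sentences = words / WORDS_PER_SENTENCE

print(f"~{words} words, or roughly {sentences:.0f} short sentences")
# -> ~150 words, or roughly 10 short sentences: about a paragraph's worth,
#    which matches the "small enough to explain, yet quite a bit" feeling.
```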
Why not just make up a new word for the concept you’re actually talking about?
I’ve found that “working memory” was coined by Miller, so actually it seems pretty reasonable to apply that term to whatever he was measuring with his experiments, although other definitions seem quite reasonable as well.
Vastly more work has been done since then, including refined definitions of working memory. The term still refers to roughly what Miller thought he was measuring, so using it this way follows his intent. But the field is still a bit of a chaotic shitshow, and modern techniques are also unclear on what they’re measuring and don’t quite match their stated definitions.
When I took classes in cog sci, this idea of “working memory” seemed common, despite coexisting with more nuanced models. (I.e., speaking about WM as 7±2 chunks was common and done without qualification, iirc, although the idea of different memories for different modalities was also discussed. Since this number is determined by experiment, not neuroanatomy, it’s inherently an operationalized concept.) Perhaps this is no longer the case!