Is the claim that before I learn some new thing, each of my working memory slots is just a single word that I already know? Because I’m pretty sure that’s not true.
First: the epistemic status of this whole convo is “thing Ray is still thinking through and is not very sure about.”
Second, for your specific question: No, my claim is that wordspace is (mostly) a subset of chunkspace, not the other way around. My claim is something like “words are chunks that you’ve given a name”, but you can think in chunks that have not been given names.
Third: I’m not taking that claim literally, I’m just sorta trying it out to see where it fits and where it fails. I’m guessing it’ll fail somewhere but I’m not actually sure where yet. If you can point to a concrete way it fails to make sense, that’d be helpful.
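If it helps, here’s a toy model of that subset claim in code — the `Chunk` class and everything in it are made up for illustration, not any standard cognitive-science construct:

```python
# Toy model: a Chunk is any mental unit; a word is just a chunk that
# happens to carry a name. Unnamed chunks are still perfectly usable.

class Chunk:
    def __init__(self, content, name=None):
        self.content = content   # whatever the chunk bundles together
        self.name = name         # None for wordless chunks

    def is_word(self):
        return self.name is not None

# You can think with a chunk that has no name...
felt_sense = Chunk("that tight feeling before a deadline")
# ...and naming it promotes it into wordspace.
felt_sense.name = "crunch-dread"

chunkspace = [Chunk("some wordless intuition"),
              Chunk("ox frame", name="yoke"),
              felt_sense]
wordspace = [c for c in chunkspace if c.is_word()]
assert set(wordspace) <= set(chunkspace)  # subset, not the other way round
```

The point being: every element of `wordspace` is a chunk, but not every chunk has made it into `wordspace`.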
But, insofar as I’m running with this idea:
An inventor who is coming up with a new thing might be working entirely with wordless chunks that they invent, combine into bigger ideas, and compress into smaller chunks, without those chunks ever being verbalized or given word form.
This part points pretty directly at research debt and inferential distance, where the debt is how many of these chunks need to be named and communicated as chunks, and the distance is how many re-chunking steps need to be done.
Thinking a little more: I think when I’m parsing a written sentence, words are closer to a one-word-to-one-chunk correspondence. When I’m thinking them, groups of words tend to be more like a single chunk. “Politics is the mindkiller” might collapse into a single slot that I’m not looking at at super-high resolution, allowing me to reason something like “‘Politics is the mindkiller’ is an incomplete idea.”
If wordspace is a subset of chunkspace and not the other way around, and you have about five chunks, do you agree that you do not have about five words, but rather more?
Yes, although I’ve heard mixed things about how many chunks you actually have; the number might be more like four.
Also, the ideas often get propagated in conjunction with other ideas. I.e. people don’t just say “politics is the mindkiller”, they say “politics is the mindkiller, therefore X” (where X is whatever point they’re making in the conversation). And that sentence is bottlenecked on total comprehensibility. So, basically the more chunks you’re using up with your core idea, the more you’re at the mercy of other people truncating it when they need to fit other ideas in.
I’d argue “politics is the mindkiller” is two chunks initially, because people parse “is” and “the” somewhat intuitively, or fill them in. Whereas “Avoid Unnecessary Political Arguments” is more like four chunks. I think you typically need at least two chunks to say something meaningful, although maybe not always.
Once something becomes popular it can eventually compress down to one chunk. But, also, I think “sentence complexity” is not only bottlenecked on chunks. “Politics is the mindkiller” can be conceptually one chunk, but it still takes up a bunch of visual or verbal space while parsing a sentence, which makes it harder to read if it’s only one clause in a multi-step argument. I’m not 100% sure if this is secretly still an application of working memory, or if it’s a different issue.
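As a toy sketch of that compression framing — the lexicon, filler-word list, and one-slot-per-word cost below are all made-up placeholders, not measured values — you could count chunk-cost like this:

```python
import re

# Toy chunk-counter. Phrases the reader has already compressed cost one
# working-memory slot; everything else costs roughly one slot per
# non-filler word.

COMPRESSED = {"politics is the mindkiller"}   # popular, pre-chunked phrases
FILLER = {"is", "the", "a", "of", "to"}       # parsed nearly for free

def chunk_cost(sentence):
    s = sentence.lower()
    cost = 0
    for phrase in COMPRESSED:
        if phrase in s:
            cost += 1                 # the whole phrase collapses to one slot
            s = s.replace(phrase, " ")
    cost += sum(1 for w in re.findall(r"[a-z]+", s) if w not in FILLER)
    return cost

print(chunk_cost("Politics is the mindkiller"))             # → 1
print(chunk_cost("Avoid unnecessary political arguments"))  # → 4
```

It also captures the truncation pressure: “Politics is the mindkiller, therefore X” costs the one compressed slot plus whatever X needs, so a bulkier core idea leaves less room for the X.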
Continuing to babble down this thought-trail: I’m wondering how Gendlin Focusing interacts with working memory.
I think the first phase of focusing is pre-chunk, as well as pre-verbal. You’re noticing a bunch of stuff going on in your body. It’s more of a sensation than a thought.
The process of focusing is trying to get those sensations into a form your brain can actually work with and think about.
I… notice that focusing takes basically all my concentration. I think at some part of the process it’s using working memory (and basically all of my working memory). But I’m not sure when that is.
One of the things you do in focusing is try to give your felt-sense a bunch of names, see if they fit, and notice the dissonance. I think when this process starts, the felt-sense is not stored in chunk form.
Gendlin Focusing might be a process where
a) first I’m trying to feel out a bunch of felt-data that isn’t even in chunk form yet
b) I sort of feel it out, while trying different word-combos on it. Meanwhile it’s getting more solid in my head. I think it’s… slowly transitioning from wordless non-chunks into wordless chunks, and then when I finally find the right name that describes it I’m like “ah, that’s it”, and it simultaneously solidifies into one or more chunks I can store properly in working memory, and also gets a name. (The name might be multiple words, and depending on context those words could correspond to one chunk or multiple.)
Not about Gendlin, but following the trail of relating chunks to other things: I wonder if propaganda or cult indoctrination can be described as a malicious chunking process.
I’ve weighed in against taking the numbers literally elsewhere, but following this thread I suddenly wondered whether the work that using few words does isn’t delivering the chunk, but rather screening out any alternative chunks. If what we are interested in is common knowledge, the challenge isn’t getting people to develop a chunk per se; rather, everyone has to agree on exactly which chunk everyone else is using. This sounds much more like the work of a filter than a generator.
When I thought about it in those terms, it occurred to me that it is perfectly possible to drive this in any direction at all; we aren’t even meaningfully constrained by reality. This feels obvious in retrospect—there’ve been lots of times when common knowledge was utterly wrong—but doing that on purpose never occurred to me.
So now it feels like what cults do, and why they sound so weird to everyone outside of them, is deliberately create a different set of chunks for normal things, for the purpose of having different chunks. Once that is done, the availability heuristic will sustain communication on that basis, and the artificially induced inferential distance will tend to isolate members from anyone outside the group.