i googled it just now bc i wanted to find a wikipedia article i read ~9 years ago mentioning “deconcentration of attention”, and this LW post came up. odd.
anyway, i first found mention of it via a blue-link on the page for Ithkuil. they’ve since changed smth, but this snippet remains:
After a mention of Ithkuil in the Russian magazine Computerra, several speakers of Russian contacted Quijada and expressed enthusiasm to learn Ithkuil for its application to psychonetics—
deconcentration of attention
i wanted to look it up bc it relates to smth i tweeted abt yesterday:
unique how the pattern is only visible when you don’t look at it. i wonder what other kind of stuff is like that. like, maybe a life-problem that’s only visible to intuition, and if you try to zoom in to rationally understand it, you find there’s no problem after all?
oh.
i notice that relaxing my attention sometimes works when eg i’m trying to recall smth at the limit of my memory (or when it’s stuck on my tongue). sorta like broadening my attentional field to connect widely distributed patterns. another frame on it is that it enables anabranching trains of thought. (ht TsviBT for the word & concept)
An anabranch is a section of a river or stream that diverts from the main channel or stem of the watercourse and rejoins the main stem downstream.
here’s my model for why it works:
(update: i no longer endorse this model; i think the whole framework of serial loops is bad, and think everything can be explained without it. still, there are parts of the below explanation that don’t depend on it, and it was a productive mistake to make.)
Working Memory is a loop of information (parts of the chewbacca-loop are tentatively my prime suspects for this). it’s likely not a fully synchronised clock-cycle, but my guess is that whenever you combine two concepts in WM, their corresponding neural ensembles undergo harmonic locking to remain there.[1]
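(to make “harmonic locking” concrete: a tiny toy sketch, entirely my own illustration rather than anything from the cited paper, of two oscillations whose dominant frequencies sit at a 2:1 ratio, which is the kind of transient alpha–theta relationship the footnote refers to.)

```python
import numpy as np

fs = 1000                        # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)      # 2 seconds of signal

theta = np.sin(2 * np.pi * 6 * t)    # 6 Hz theta oscillation
alpha = np.sin(2 * np.pi * 12 * t)   # 12 Hz alpha oscillation

def peak_freq(x):
    # dominant frequency = location of the FFT magnitude peak
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs[np.argmax(spectrum)]

ratio = peak_freq(alpha) / peak_freq(theta)
print(f"alpha:theta frequency ratio ≈ {ratio:.2f}")  # 2.00 → 2:1 locking
```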
every iteration, information in the loop is a weighted combination of:
(A) stuff that’s already in working memory
(B) new stuff (eg memories) that reaches salience due to sufficient association with stuff from the previous iteration of WM
(C) new stuff from sensory networks (eg sights, sounds) that wasn’t automatically filtered out by top-down predictions
for new information (B or C) to get into the loop, it has to exceed a bottom-up threshold for salience.
the salience network determines the weighting between the channels (A, B, C), and/or the height of their respective salience thresholds. (both are ways to achieve the same thing, and i’m unsure which frame is better.)
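(here’s a minimal sketch of one iteration of that loop; the channel names, weights, and thresholds are my own toy formalisation of the model above, with made-up numbers, not anything from the literature.)

```python
import numpy as np

rng = np.random.default_rng(0)

def wm_iteration(wm, memories, sensory, w, thresholds):
    """one iteration of the (toy) working-memory loop.

    wm         : salience of items already in the loop  (channel A)
    memories   : salience of associated memories        (channel B)
    sensory    : salience of unpredicted sensory input  (channel C)
    w          : channel weights, set by the salience network
    thresholds : bottom-up salience thresholds for B and C
    """
    # B and C only enter the loop if they exceed their threshold
    b = np.where(memories > thresholds["B"], memories, 0.0)
    c = np.where(sensory > thresholds["C"], sensory, 0.0)
    # the next iteration is a weighted combination of the three channels
    new_wm = w["A"] * wm + w["B"] * b + w["C"] * c
    return new_wm / new_wm.sum()   # normalise total salience

wm = rng.random(5)   # five items currently in WM
for _ in range(3):
    wm = wm_iteration(wm, rng.random(5), rng.random(5),
                      w={"A": 0.6, "B": 0.3, "C": 0.1},
                      thresholds={"B": 0.5, "C": 0.7})
```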
“concentrating hard” on trying to recall smth has the effect of silencing the flow of information from B & C, such that the remaining salience is normalised exclusively over stuff in A. iow, it narrows the flow of new information into WM.
(bonus point: this is what “top-down attention” is. it’s not “reach-out-and-grab” as it may intuitively feel like. instead, it’s a process where the present weighted combination of items in WM determines (allocates/partitions) salience between items in WM.)
this is a tradeoff, however. if you narrow all salience towards eg a specific top-down query ↓Q, this has smth like the following two effects:
you make it easier to detect potential answers ↑Q by reducing the weight of unrelated competing noise
but you also heighten the salience threshold ↑Q must exceed to reach you
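(toy numbers to make both effects concrete; the specific functional forms here are arbitrary choices of mine, just to show the tradeoff.)

```python
import numpy as np

answer = 0.3                          # bottom-up salience of the answer ↑Q
noise = np.array([0.2, 0.25, 0.15])   # salience of unrelated competing items

def detectability(answer, noise, narrowing):
    """narrowing ∈ [0, 1]: how hard you concentrate on the query ↓Q."""
    threshold = 0.2 + 0.3 * narrowing         # effect 2: the threshold climbs
    if answer < threshold:
        return 0.0                            # ↑Q never reaches WM at all
    weighted_noise = (1 - narrowing) * noise  # effect 1: noise is downweighted
    return answer / (answer + weighted_noise.sum())

for narrowing in (0.0, 0.3, 0.6):
    print(f"narrowing={narrowing}: {detectability(answer, noise, narrowing):.2f}")
# narrowing=0.0: 0.33  (answer competes with full noise)
# narrowing=0.3: 0.42  (noise suppressed, so the answer stands out more)
# narrowing=0.6: 0.00  (threshold 0.38 now exceeds the answer's salience 0.3)
```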
in light of this, here are some tentative takeaways:
if your WM already contains sufficient information to triangulate towards the item you’re looking for, and the recollection/insight is bottlenecked by competing noise, concentrate harder.
but if WM doesn’t have sufficient information, concentrating could prematurely block essential cues that aren’t yet strongly associated with ↓Q directly.
and in cases where features in ↓Q itself are temporarily interfering w the recollection, globally narrowing or broadening concentration may not unblock it. instead, consider pausing for a bit and trying to find alternative ways to ask Q.
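(the same three takeaways as a purely illustrative decision rule:)

```python
def recall_strategy(wm_has_enough_cues, blocked_by_noise, query_interferes):
    """toy decision rule encoding the three takeaways above."""
    if query_interferes:
        return "pause, then find another way to ask Q"
    if wm_has_enough_cues and blocked_by_noise:
        return "concentrate harder"
    return "relax / broaden attention to let more cues in"
```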
Ithkuil
Natural languages are adequate, but that doesn’t mean they’re optimal. — John Quijada
i’m a fan of Quijada (eg this lecture) and his intensely modular & cognitive-linguistics-inspired conlang, Ithkuil.
that said, i don’t think it sufficiently captures the essence of what enables language to be an efficient tool for thought. LW has a wealth of knowledge about that in particular, so i’m sad conlanging (and linguistics in general) hasn’t received more attention here. it may not be that hard; the EMH doesn’t apply when ~nobody’s tried.
We can think of a bunch of ideas that we like, and then check whether [our language can adequately] express each idea. We will almost always find that [it is]. To conclude from this that we have an adequate [language] in general, would [be silly]. — The possible shared Craft of Deliberate Lexicogenesis (freely interpreted)
[1] Furthermore, a relationship with task performance was evident, indicating that an increased occurrence of harmonic locking (i.e., transient 2:1 ratios) was associated with improved arithmetic performance. These results are in line with previous evidence pointing to the importance of alpha–theta interactions in tasks requiring working memory and executive control. (Rodriguez-Larios & Alaerts, 2019)