Selfish neuremes adapt to prevent you from reprioritizing
“Neureme” is my most general term for units of selection in the brain.[1]
The term is agnostic about what exactly the physical thing is that’s being selected. It just refers to whatever is implementing a neural function and is selected as a unit.
So depending on use-case, a “neureme” can semantically resolve to a single neuron, a collection of neurons, a neural ensemble/assembly/population-vector/engram, a set of ensembles, a frequency, or even dendritic substructure if that plays a role.
For every activity you’re engaged in, there are certain neuremes responsible for specializing at that task.
These neuremes are strengthened or weakened/changed in proportion to how effectively they can promote themselves to your attention.
“Attending to” assemblies of neurons means that their firing-rate maxes out (gamma frequency), and their synapses are flushed with acetylcholine, which is required for encoding memories and queuing them for consolidation during sleep.
So we should expect that neuremes are selected for effectively keeping themselves in attention, even in cases where that makes you less effective at tasks which tend to increase your genetic fitness.
Note that there’s hereditary selection going on at the level of genes, and at the level of neuremes. But since genes adapt much slower, the primary selection-pressures neuremes adapt to arise from short-term inter-neuronal competitions. Genes are limited to optimizing the general structure of those competitions, but they can only do so in very broad strokes, so there’s lots of genetically-misaligned neuronal competition going on.
A corollary of this is that neuremes are stuck in a tragedy of the commons: If all neuremes “agreed to” never develop any misaligned mechanisms for keeping themselves in attention—and we assume this has no effect on the relative proportion of attention they receive—then their relative fitness would stay constant at a lower metabolic cost overall. But since no such agreement can be made, there’s some price of anarchy wrt the cost-efficiency of neuremes.
Thus, whenever some neuremes uniquely associated with a cognitive state are *dominant* in attention, whatever mechanisms they’ve evolved for persisting the state are going to be at maximum power, and this is what makes the brain reluctant to gain perspective when on stimulants.
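To make the tragedy-of-the-commons point concrete, here’s a minimal sketch that models attention competition as a Tullock contest. This is my own toy formalism, not anything the brain literally implements, and the parameters and proportional-share rule are illustrative assumptions.

```python
# Toy model of the neureme tragedy-of-the-commons: each of n units spends
# metabolic effort e_i on grabbing attention, receives the attention share
# e_i / sum(e), and pays cost * e_i for the effort (a Tullock contest).
# Best-response dynamics find the Nash equilibrium, which we compare with
# the cooperative profile where everyone spends ~nothing and keeps the
# same share.

n, cost = 10, 1.0
efforts = [0.01] * n  # start from near-cooperative efforts

def best_response(others_sum: float) -> float:
    # Maximize e / (e + S) - cost * e over e >= 0.
    # Closed form from the first-order condition: e = sqrt(S / cost) - S.
    return max((others_sum / cost) ** 0.5 - others_sum, 0.0)

for _ in range(200):  # iterate until efforts settle at equilibrium
    for i in range(n):
        efforts[i] = best_response(sum(efforts) - efforts[i])

share = efforts[0] / sum(efforts)        # attention share per unit (= 1/n)
nash = share - cost * efforts[0]         # payoff at the Nash equilibrium
coop = 1 / n                             # same share at ~zero effort

print(f"equilibrium effort: {efforts[0]:.3f}")   # ~0.09
print(f"attention share:    {share:.2f}")        # still 0.10
print(f"payoff: Nash {nash:.3f} vs cooperative {coop:.3f}")
```

At equilibrium every unit burns effort just to keep the same 1/n share it would have had for free under the cooperative profile; that gap is the price of anarchy referred to above.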
A technique for making the brain trust prioritization/perspectivization
So, in conclusion, maybe this technique could work:
If I feel like my brain is sucking me into an unproductive rabbit-hole, I set a timer for 60 seconds, during which I can check my todo-list and prioritize what I ought to do next.
But before that timer ends, I will have set another timer (e.g. 10 min) during which I commit to continuing the present task, and only after that do I switch to whatever I decided on.
The hope is that my brain learns to trust that gaining perspective doesn’t automatically mean we have to abandon the present task, and this means it can spend less energy on inhibiting signals that try to gain perspective.
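If it helps to see the contingencies spelled out, here’s a minimal command-line sketch of the protocol. The durations, prompts, and the `perspective_break` name are all just illustrative choices.

```python
# A minimal command-line sketch of the two-timer protocol above.
import time

def perspective_break(review_s: int = 60, commit_min: int = 10) -> None:
    print(f"{review_s}s: check the todo-list and pick the next task.")
    time.sleep(review_s)
    # The key step: return to the *current* task first, so gaining
    # perspective is never punished by immediate task-abandonment.
    print(f"{commit_min} min: back to the current task before switching.")
    time.sleep(commit_min * 60)
    print("Commitment honored; switch to the task you picked.")

# Short values for a quick demo; real use would run the full durations.
perspective_break(review_s=5, commit_min=1)
```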
From experience, I know something like this has worked for:
Making me trust my task-list
When my brain trusts that all my tasks are in my todo-list, and that I will check my todo-list every day, it no longer bothers reminding me about stuff at random intervals.
Reducing dystonic distractions
When I deliberately schedule stuff I want to do less of (e.g. masturbation, cooking, twitter), and commit to actually *do* those things when scheduled, my brain learns to trust that, and stops bothering me with the desires when they’re not scheduled.
So it seems likely that something in this direction could work, even if this particular technique fails.
The “-eme” suffix inherits from “emic unit”, e.g. genes, memes, sememes, morphemes, lexemes, etc. It refers to the minimum indivisible things that compose to serve complex functions. The important notion here is that even if the eme has complex substructure, all its components are selected as a unit, which means that all subfunctions hitchhike on the net fitness of all other subfunctions.
This comment is making me wish I could bookmark comments on LW. @habryka,
Bonus point: neuronal “voting power” is capped at roughly 100 Hz, so neurons “have an incentive” (i.e., will be selected based on the extent to which they) vote for what related neurons are likely to vote for. It’s analogous to a winner-takes-all election where you don’t want to waste your vote on third-party candidates who are unlikely to be competitive at the top. And when most voters also vote this way, it becomes Keynesian in the sense that you have to predict[1] what other voters predict other voters will vote for, and the best candidates are those who seem the most like good Schelling-points.
That’s why global/conscious “narratives” are essential in the brain—they’re metabolically efficient Schelling-points.
Neuron-voters needn’t “make predictions” like human-voters do. It just needs to be the case that their stability is proportional to their ability to “act as if” they predicted other neurons’ predictions (and so on).
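A toy simulation of that Keynesian dynamic (the herding weight, population sizes, and uniform private preferences are all made-up illustrative assumptions): each voter mostly chases last round’s vote shares, and the vote mass collapses onto a single Schelling-point candidate within a few rounds.

```python
# Voters have weak private preferences but mostly vote for whichever
# candidate they predict others will vote for, proxied here by last
# round's vote shares.
import random

random.seed(0)
n_voters, n_candidates, rounds = 1000, 5, 10
herding = 0.9   # how strongly voters chase the predicted consensus

private = [[random.random() for _ in range(n_candidates)]
           for _ in range(n_voters)]
shares = [1 / n_candidates] * n_candidates   # initial, uninformative poll

for r in range(rounds):
    counts = [0] * n_candidates
    for v in range(n_voters):
        scores = [(1 - herding) * private[v][c] + herding * shares[c]
                  for c in range(n_candidates)]
        counts[scores.index(max(scores))] += 1
    shares = [c / n_voters for c in counts]
    print(f"round {r}: shares = {[round(s, 2) for s in shares]}")

# Vote mass rapidly collapses onto one candidate, even though private
# preferences are spread almost uniformly across all five.
```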
I messed up. I meant to comment on another comment of yours, the one replying to niplav’s post about fat tails disincentivizing compromise. That was the one I really wished I could bookmark.
Oh! Well, I’m as happy about receiving a compliment for that as I am for what I thought I got the compliment for, so I forgive you. Thanks! :D
I think hastening of subgoal completion[1] is some evidence for the notion that competitive inter-neuronal selection pressures are frequently misaligned with genetic fitness. People (me included) routinely choose to prioritize completing small subtasks in order to reduce cognitive load, even when that strategy predictably costs more net metabolic energy. (But I can think of strong counterexamples.)
The same pattern one meta-level up is “intragenomic conflict”[2], where genetic lineages have had to spend significant selection-power to prevent genes from fighting dirty. For example, the mechanism of meiosis itself may largely be maintained in equilibrium due to the longer-term necessity of preventing stuff like meiotic drives. An allele (or a collusion of them) which successfully transfers to offspring with probability >50% may increase its relative fitness even if it marginally reduces its phenotype’s viability.
My generalized term for this is “intra-emic conflict” (pinging the concept of an “eme” as defined in the above comment).

[1] “We asked university students to pick up either of two buckets, one to the left of an alley and one to the right, and to carry the selected bucket to the alley’s end. In most trials, one of the buckets was closer to the end point. We emphasized choosing the easier task, expecting participants to prefer the bucket that would be carried a shorter distance. Contrary to our expectation, participants chose the bucket that was closer to the start position, carrying it farther than the other bucket.” — Pre-Crastination: Hastening Subgoal Completion at the Expense of Extra Physical Effort

[2] “Intragenomic conflict refers to the evolutionary phenomenon where genes have phenotypic effects that promote their own transmission in detriment of the transmission of other genes that reside in the same genome.”
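As a sanity check on the meiotic-drive claim, here’s a back-of-envelope one-locus model (a textbook-style recursion; the particular parameter values are arbitrary illustrative choices): a driver allele that heterozygotes transmit with probability k > 1/2 spreads despite a viability cost s.

```python
# One-locus model: genotypes AA, Aa, aa with viabilities 1, 1-h*s, 1-s,
# where "a" is the driver allele and Aa parents transmit it with
# probability k instead of the Mendelian 1/2.
k, h, s = 0.7, 0.5, 0.2   # transmission bias, dominance, homozygote cost
q = 0.01                  # initial driver-allele frequency

for gen in range(101):
    p = 1.0 - q
    w_AA, w_Aa, w_aa = 1.0, 1.0 - h * s, 1.0 - s
    mean_w = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa
    if gen % 20 == 0:
        print(f"gen {gen:3d}: driver freq {q:.3f}, mean fitness {mean_w:.3f}")
    # Driver gametes come from aa parents (all of them) and from Aa
    # parents (fraction k of their gametes).
    q = (q*q*w_aa + 2*p*q*w_Aa*k) / mean_w
```

When rare, the driver grows by a factor of roughly 2k(1 - hs) per generation, so it invades whenever that exceeds 1 (here 2 * 0.7 * 0.9 = 1.26), while mean fitness falls as it spreads: higher relative fitness, lower phenotype viability.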
“So we should expect that neuremes are selected for effectively keeping themselves in attention, even in cases where that makes you less effective at tasks which tend to increase your genetic fitness.”

Furthermore, the neuremes (association-clusters) you are currently attending to have an incentive to recruit associated neuremes into attention as well, because then they feed each other’s activity recursively, and can dominate attention for longer. I think of it like association-clusters feeding activity into their “friends” who are most likely to reciprocate.
And because recursive connections between association-clusters tend to reflect some ground truth about causal relationships in the territory, this tends to be highly effective as a mechanism for inference. But there must be edge-cases (though I can’t recall any atm...).
Imagining agentic behaviour in (/taking the intentional stance wrt) individual brain-units is great for generating high-level hypotheses about mechanisms, but it obviously misfires sometimes; don’t try this at home, etc.
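With that caveat registered, here’s a minimal spreading-activation sketch of the recruitment story above (the graph, weights, and decay are invented for illustration): a clique of clusters that reciprocate activity ends up holding nearly all of the normalized activity.

```python
# Nodes 0-2 form a reciprocating clique; nodes 3-5 are weakly connected.
W = [
    [0.0, 0.8, 0.8, 0.1, 0.0, 0.0],
    [0.8, 0.0, 0.8, 0.0, 0.1, 0.0],
    [0.8, 0.8, 0.0, 0.0, 0.0, 0.1],
    [0.1, 0.0, 0.0, 0.0, 0.2, 0.2],
    [0.0, 0.1, 0.0, 0.2, 0.0, 0.2],
    [0.0, 0.0, 0.1, 0.2, 0.2, 0.0],
]
act = [1.0] * 6   # everything starts equally active

for step in range(6):
    # Each node's next activity is what its neighbours feed it, with decay.
    nxt = [0.9 * sum(W[i][j] * act[j] for j in range(6)) for i in range(6)]
    total = sum(nxt)
    act = [a / total for a in nxt]   # normalize: attention is a limited resource
    print(f"step {step}: clique share = {sum(act[:3]):.2f}")

# The reciprocating clique soaks up nearly all of the normalized activity,
# i.e. it "dominates attention" purely through mutual recruitment.
```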