In nature, you can imagine species undergoing selection on several levels / time-horizons at once. If the long-term fitness-considerations for genes differ from the short-term ones, then long-term selection (let’s call this “longscopic”) may imply a net fitness-advantage for genes which remove options wrt climbing the shortscopic gradient.
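(A toy numerical illustration of how those two horizons can pull apart, using a bet-hedging-style setup of my own choosing rather than anything from the post: a lineage that wins the shortscopic comparison in expectation every generation can still lose longscopically, because long-run growth compounds multiplicatively, so giving up the option of the short-term gamble is the longscopically favored move.)

```python
import random

# Toy model (illustrative assumption, not from the post): two lineages whose
# per-generation growth multiplies over time.
# "Shortscopic" lineage: higher expected growth each generation (1.5x or 0.6x, mean 1.05x).
# "Longscopic" lineage: has "removed the option" of the gamble and grows a steady 1.02x.
random.seed(0)

GENERATIONS = 1000
short_pop, long_pop = 1.0, 1.0

for _ in range(GENERATIONS):
    short_pop *= random.choice([1.5, 0.6])  # arithmetic mean 1.05 per generation
    long_pop *= 1.02                        # lower mean, zero variance

# Because growth compounds, the geometric mean decides the long run:
# sqrt(1.5 * 0.6) ~= 0.95 < 1.02, so the steady lineage ends up far ahead despite
# losing the shortscopic comparison in expectation every single generation.
print(f"shortscopic lineage after {GENERATIONS} generations: {short_pop:.3e}")
print(f"longscopic lineage after {GENERATIONS} generations:  {long_pop:.3e}")
```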
Meiosis as a “veil of cooperation”
Holly suggests this explains the origin of meiosis itself. Recombination randomizes which alleles you end up with in the next generation, so it’s harder for you to collude with any particular subset of them. And this forces you (as an allele hypothetically planning ahead) to optimize/cooperate for the benefit of all the other alleles in your DNA.[1] I call it a “veil of cooperation”[2], because it works by preventing you from “knowing” which situation you end up in (ie, it destroys options wrt which correlations you can “act on” / adapt to).
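(A minimal population-genetics sketch of why this works: under recombination fraction r, the statistical association between alleles at two loci, their linkage disequilibrium D, decays by a factor of (1 − r) every generation of random mating, so any collusion that depends on specific alleles reliably co-occurring gets shuffled away. The code framing is mine; the decay formula is the standard textbook result.)

```python
# Standard two-locus result: under random mating, linkage disequilibrium D between
# alleles at two loci decays by (1 - r) each generation, where r is the
# recombination fraction (r = 0.5 for unlinked loci). Selection is then left to
# act on each allele's average effect across genetic backgrounds.

def ld_after(generations: int, d0: float, r: float) -> float:
    """Linkage disequilibrium remaining after some generations of recombination."""
    return d0 * (1 - r) ** generations

D0 = 0.25  # maximal initial association between two "colluding" alleles
for r in (0.5, 0.1, 0.01):
    trajectory = [ld_after(t, D0, r) for t in (0, 5, 20, 50)]
    print(f"r={r:>4}: D over generations 0/5/20/50 = "
          + ", ".join(f"{d:.4f}" for d in trajectory))
```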
Compare that to, say, postsegregational killing mechanisms rampant[3] in prokaryotes. Genes on a single plasmid ensure that when the host organism copies itself, any host-copy that doesn’t also include a copy of the plasmid is killed by internal toxins. This has the effect of increasing the plasmid’s relative proportion in the host species, and unless there are mechanisms preventing that kind of internal misalignment, the selfish adaptation remains stable.
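(Here is a hedged toy model of that dynamic, with made-up parameters: compare how much of the host population still carries a costly plasmid after some generations, with versus without the killing of plasmid-free segregants.)

```python
# Toy comparison (purely illustrative parameters): a costly plasmid in a well-mixed
# host population, with vs without postsegregational killing (PSK). Carriers
# occasionally produce plasmid-free segregants; under PSK those segregants inherit
# the stable toxin but not the short-lived antitoxin, and most of them die.

PLASMID_COST = 0.005  # relative growth cost of carrying the plasmid
SEG_LOSS = 0.02       # fraction of carrier offspring born without the plasmid
KILL_RATE = 0.99      # fraction of fresh segregants killed under PSK

def carrier_fraction(generations: int, psk: bool) -> float:
    carriers, free = 0.99, 0.01  # relative population weights
    for _ in range(generations):
        offspring = carriers * (1 - PLASMID_COST)
        segregants = offspring * SEG_LOSS
        if psk:
            segregants *= (1 - KILL_RATE)  # toxin kills most plasmid-free segregants
        carriers = offspring * (1 - SEG_LOSS)
        free = free + segregants
        total = carriers + free
        carriers, free = carriers / total, free / total
    return carriers

for gens in (20, 50, 100):
    print(f"after {gens:3d} generations: "
          f"with PSK {carrier_fraction(gens, True):.2f}, "
          f"without PSK {carrier_fraction(gens, False):.2f}")
```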
There’s a constant tug-of-war between local vs global and shortscopic vs longscopic gradients all across everything, and cohesive organisms enforce global/long selection-scopes by restricting the options their subcomponents have for propagating themselves.
Generalization in the brain as an alignment mechanism against shortscopic dimensions of its reward functions (ie prevents overfitting)
Another example: REM-sleep & episodic daydreaming provide constant generalization-pressure for neuremic adaptations (learned behaviors) to remain beneficial across all the imagined situations (and chaotic noise) your brain puts them through. Again an example of a shortscopic gradient being constantly aligned to a longscopic gradient.
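(If the analogy to regularization holds, this is essentially data augmentation: a learned rule must keep working on jittered, imagined variants of the situations it was trained on. The mapping and the toy below are my own gloss, not something spelled out above.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A small noisy sample of "experienced situations" drawn from an underlying regularity.
true_fn = np.sin
x_train = np.linspace(0.0, 3.0, 12)
y_train = true_fn(x_train) + rng.normal(0.0, 0.1, x_train.size)

# "Imagined replay" (my gloss on the REM/daydreaming point): jittered copies of the
# experienced situations, paired with the same learned responses, so the fitted rule
# has to stay sensible on nearby situations it never actually saw.
REPLAYS = 20
x_aug = np.concatenate([x_train] + [x_train + rng.normal(0.0, 0.2, x_train.size)
                                    for _ in range(REPLAYS)])
y_aug = np.tile(y_train, REPLAYS + 1)

def test_error(coeffs: np.ndarray) -> float:
    x_test = np.linspace(0.0, 3.0, 200)
    return float(np.mean((np.polyval(coeffs, x_test) - true_fn(x_test)) ** 2))

DEGREE = 9  # flexible enough to overfit 12 points
overfit = np.polyfit(x_train, y_train, DEGREE)
replayed = np.polyfit(x_aug, y_aug, DEGREE)

print(f"test error, fit to raw experiences only: {test_error(overfit):.4f}")
print(f"test error, fit with imagined replays:   {test_error(replayed):.4f}")
```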
Some abstractions for thinking about internal competition between subdimensions of a global gradient
For example, you can imagine each set of considerations as a loss gradient over genetic-possibility-space, and the gradients diverging from each other on specific dimensions. Points where they intersect from different directions are “pleiotropic/polytelic pinch-points”, and represent the best compromise geneset for both gradients—sorta like an equilibrium price in a supply-&-demand framework.
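(A sketch under my own simplifying assumption that each set of considerations is a smooth quadratic loss over a single geneset coordinate: the pinch-point is where the two gradients meet from opposite directions and cancel, the same way an equilibrium price is where supply and demand cross.)

```python
# Sketch under a simplifying assumption (mine, not the post's): each set of
# considerations is a quadratic loss over one geneset coordinate x, with its own
# preferred point and steepness.

A_LONG, W_LONG = 0.0, 1.0    # longscopic gradient prefers x = 0, steepness 1.0
A_SHORT, W_SHORT = 2.0, 0.5  # shortscopic gradient prefers x = 2, steepness 0.5

def grad_long(x: float) -> float:
    return 2 * W_LONG * (x - A_LONG)

def grad_short(x: float) -> float:
    return 2 * W_SHORT * (x - A_SHORT)

# Pinch-point: the two gradients cancel, i.e. grad_long(x) + grad_short(x) = 0.
# For quadratics this is the steepness-weighted average of the preferred points,
# the analogue of the price where supply and demand curves cross.
x_star = (W_LONG * A_LONG + W_SHORT * A_SHORT) / (W_LONG + W_SHORT)
print(f"compromise geneset x* = {x_star:.3f}")
print(f"gradients there: long = {grad_long(x_star):+.3f}, short = {grad_short(x_star):+.3f}")
```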
To take the economics-perspective further: if a system (an economy, a gene pool, a brain, whatever) is at equilibrium price wrt the many dimensions of its adaptation-landscape[4] (whether the dimensions be primary rewards or acquired proxies), then globally-misaligned local collusions can be viewed as inframarginal trade[5]. Hence this #succinct-statement from my notes:
“mesaoptimizers (selfish emes) evolve in the inframarginal rent (~slack) wrt the global loss-function.”
(Thanks for prompting me to rediscover it!)
So, take a brain-example again: My brain has both shortscopic and longscopic reward-proxies & behavioral heuristics. When I postpone bedtime in order to, say, get some extra work done because I feel behind, the neuremes representing my desire to get work done now are bidding for decision-weight at some price[6], and decision-weight-producers will fulfill those trades & supply up to equilibrium. But unfortunately, those neuremes have cheated the market by isolating the bidding-war to shortscopic bidders (ie enforced a particularly narrow perspective); if they hadn’t, the neuremes representing longscopic concerns would fairly outbid them.[7]
(Note: The economicsy thing is a very incomplete metaphor, and I’m probably messing things up, but this is theory, so communicating promising-seeming mistakes is often as helpfwl as being correct-but-slightly-less-bold.)
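(To make the “cheated market” concrete, here is a toy auction over decision-weight; the bidder names and values are hypothetical, made up for illustration. The only thing the shortscopic coalition changes is who is admitted to bid, and that alone flips the outcome.)

```python
# Toy auction over "decision-weight" (hypothetical bidders and values, purely
# illustrative). Each bid is the brain's estimate of the value of winning control
# right now; a first-price auction simply hands control to the highest admitted bid.

BIDS = {
    "finish-work-now": 6.0,   # shortscopic: value of grabbing an extra hour tonight
    "scroll-something": 2.0,  # shortscopic: low-value distraction
    "maintain-bedtime": 9.0,  # longscopic: expected long-run value of sleep kept intact
}

def winner(admitted: list[str]) -> str:
    return max(admitted, key=lambda name: BIDS[name])

# Full market: every concern gets to bid, and the longscopic bid wins.
print("all bidders admitted ->", winner(list(BIDS)))

# "Cheated" market: the bidding-war is isolated to shortscopic concerns,
# so a lower-value bid ends up buying the decision-weight.
shortscopic_only = ["finish-work-now", "scroll-something"]
print("shortscopic bidders only ->", winner(shortscopic_only))
```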
[1] ie, it marginally flattens the intra-genomic competition-gradient, thereby making cooperative fitness-dimensions relatively steeper.
[2] from “veil of ignorance”.
[3] or at least that’s the word they used… I haven’t observed this rampancy directly.
[4] aka “loss-function”
[5] Inframarginal trade: trade in which producers & consumers match at an off-equilibrium price, which requires that the worse-off party not have the option of getting the same thing cheaper at the global equilibrium price. It thus reflects a local-global disparity in which trades things are willing to make (ie which interactions are incentivized).
[6] The “price” in this case may be that any assembly of neurons which “bids” for relevancy to current activity takes on some risk of depotentiation if it then fails to synchronize. That is, if its firing rate slips off the harmonics of the dominant oscillations going on at present, and starts firing into the STDP-window for depotentiation.
[7] If they weren’t excluded from the market, bedtime-maintenance-neuremes would outbid working-late-neuremes, with bids reflecting the brain’s expectation that maintaining bedtime has higher utility long-term compared to what can be greedily grabbed right now. (Because BEDTIME IS IMPORTANT!) :p