Bonus point: neuronal “voting power” is capped at roughly 100 Hz, so neurons “have an incentive” (i.e., will be selected based on the extent to which they) vote for what related neurons are likely to vote for. It’s analogous to a winner-takes-all election where you don’t want to waste your vote on third-party candidates who are unlikely to be competitive at the top. And when most voters also vote this way, it becomes Keynesian (in the beauty-contest sense) that you have to predict[1] what other voters predict other voters will vote for, and the best candidates are those who seem most like good Schelling points.
That’s why global/conscious “narratives” are essential in the brain: they’re metabolically efficient Schelling points.
Neuron-voters needn’t “make predictions” the way human voters do. It just needs to be the case that their stability is proportional to their ability to “act as if” they predicted other neurons’ predictions (and so on).
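(A toy sketch of that dynamic, not a model of actual neurons: a population of rate-capped “voter” units whose stability grows when they back the candidate the rest of the population is converging on, and who occasionally defect to the front-runner. Everything here, the unit count, the candidate count, the 100 Hz cap, the update sizes, and the switching probability, is made up purely for illustration.)

```python
# Toy "Keynesian beauty contest" among rate-capped voter units (illustrative
# only; not a model of real neurons). Each unit backs one of K candidate
# "narratives" with voting power capped at ~100 Hz. Units gain stability when
# they vote with the predicted winner, so the population drifts toward a
# single Schelling-point candidate.
import numpy as np

rng = np.random.default_rng(0)

N, K = 200, 5            # voter units and candidate narratives (arbitrary)
MAX_RATE = 100.0         # cap on voting power (~100 Hz)

votes = rng.integers(0, K, size=N)          # each unit's current candidate
rates = rng.uniform(10, MAX_RATE, size=N)   # each unit's voting power

for step in range(50):
    # Candidate support = rate-weighted votes (voting power is capped).
    support = np.bincount(votes, weights=rates, minlength=K)
    predicted_winner = support.argmax()

    # Units that "act as if" they predicted the consensus gain stability;
    # dissenters lose it and occasionally defect to the front-runner.
    agrees = votes == predicted_winner
    rates = np.clip(rates + np.where(agrees, 5.0, -5.0), 1.0, MAX_RATE)
    switch = (~agrees) & (rng.random(N) < 0.2)
    votes[switch] = predicted_winner

winner = np.bincount(votes, weights=rates, minlength=K).argmax()
print("winning narrative:", winner, "| vote share:", (votes == winner).mean())
```

Run with different seeds and the population still collapses onto one candidate within a few dozen steps; that convergence onto a shared “narrative” is the Schelling-point behavior being gestured at above.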
This comment is making me wish I could bookmark comments on LW. @habryka
I messed up. I meant to comment on another comment of yours, the one replying to niplav’s post about fat tails disincentivizing compromise. That was the one I really wished I could bookmark.
Oh! Well, I’m as happy about receiving a compliment for that as I am for what I thought I got the compliment for, so I forgive you. Thanks! :D