It feels like there’s a never-directly-stated but oft-implied claim lurking in this essay.
The claim goes: the reason we can’t consciously control our perception of the color of the sky is that, if we could, human partisanship would ruin it.
The sane response, upon realizing that internal color-of-the-sky is determined not by the sky-sensors but by a tribal monkey-mind prone to politicking and groupthink, is to scream in horror and then directly re-attach the world-model-generator to reality as quickly as possible.
If you squint, and treat partisanship as an ontologically basic thing that could exert evolutionary pressure, it almost seems plausible that avoidance of partisanship failure modes might actually be the cause of the wiring of the occipital cortex :-)
However, I don’t personally think that “avoiding the ability of partisanship to ruin vision” is the reason human vision is wired up so that we can’t see whatever we consciously choose to see.
Part of the reason I don’t believe this is that the second half of the implication is simply not universally true. I know people who report being able to modify their visual sensorium at will, so for them it really does seem that they could do all sorts of things to their visual world model if they put some creativity and effort into it. Also: synesthesia is a thing, and can probably be cultivated...
But even if you skip over such issues as non-central outliers...
It makes conceptual sense to me that something like a common cortical algorithm (though maybe not exactly the precise algorithmic sketch being discussed under that name) probably does happen in the brain. Coarsely: it probably has to do with neuron metabolism and with how neurons measure and affect each other. Separately from this, there are lots of processes for controlling which neurons are “near” to which other neurons.
My personal guess is that in actual brains the process mixes sparse/Bayesian/pooling/etc. perception with negative feedback control… and of course “maybe other stuff too”. But fundamentally I think we start with “all the computing elements potentially measuring and controlling their neighbors”, and then, when that causes terrible outcomes (like nearly instantaneous subconscious wireheading in organisms with 3 neurons), evolution prunes that particular failure mode out, and then iterates.
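To make that concrete, here is a deliberately crude toy simulation (my own sketch, with made-up numbers and made-up names like `run_organism` and `tamper_gain`; nothing here comes from the essay or from real neuroscience). It just shows why “control elements that can manipulate their own perception elements” makes wireheading an attractor for inner selection, and why an outer selection loop that scores actual outcomes prunes it out:

```python
# Toy sketch (my own illustration): a two-element "organism" whose control
# element can spend effort acting on the world or tampering with its own sensor.
# Selection on the *reported* signal rewards tampering (wireheading); selection
# on the *actual* resource gathered (a stand-in for evolution) prunes it out.
import random

def run_organism(tamper_gain, steps=100):
    """Simulate one organism; return (reported_signal, actual_resource)."""
    resource = 0.0   # what the organism really gathered
    reported = 0.0   # what its own sensor claims it gathered
    for _ in range(steps):
        effort = random.random()               # one unit of control effort per step
        acted = effort * (1.0 - tamper_gain)   # fraction spent acting on the world
        tampered = effort * tamper_gain        # fraction spent inflating the sensor
        resource += acted                      # real gains come only from real action
        reported += acted + 10.0 * tampered    # tampering is cheap and inflates the reading
    return reported, resource

random.seed(0)
variants = [0.0, 0.5, 1.0]  # how strongly each variant wireheads
results = {g: run_organism(g) for g in variants}

# Judged by its own reported signal, the full wirehead (1.0) "wins"...
print(max(variants, key=lambda g: results[g][0]))  # -> 1.0
# ...but judged by actual resources, the non-tampering variant (0.0) wins,
# which is the variant an outer evolutionary loop would keep.
print(max(variants, key=lambda g: results[g][1]))  # -> 0.0
```

Nothing in this toy depends on the particular “10.0” factor except that tampering with the sensor is cheaper than actually improving the world, which is exactly what makes wireheading such a cheap attractor.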
However, sometimes top-down control of measurement is functional. It happens subconsciously in ancient and useful ways in our own brains, as when efferent cochlear innervation projects something like “expectations about what is to be heard” that make the cochlea differentially sensitive to inputs, effectively increasing the dynamic range of sounds that can be neurologically distinguished.
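As a loose analogy only (not a model of the actual cochlear efferents; the clipping sensor and the `expectation` parameter below are things I’m inventing for illustration), top-down gain control of a limited-range sensor can look like this:

```python
# Toy sketch: a fixed-range sensor saturates on loud inputs, so two loud sounds
# become indistinguishable. A top-down "expectation" turns the gain down first,
# so both land inside the usable window and stay distinguishable.
def clipped_sensor(x, lo=-1.0, hi=1.0):
    """A crude fixed-range transducer: anything outside [lo, hi] saturates."""
    return max(lo, min(hi, x))

def perceive(signal, expectation):
    """Scale the input by 1/expectation before sensing; undo the scaling downstream."""
    gain = 1.0 / max(expectation, 1e-9)
    return clipped_sensor(signal * gain) / gain

loud_a, loud_b = 50.0, 80.0  # both far beyond the sensor's native range

print(clipped_sensor(loud_a), clipped_sensor(loud_b))    # 1.0 1.0   (saturated, identical)
print(perceive(loud_a, 100.0), perceive(loud_b, 100.0))  # 50.0 80.0 (still distinguishable)
```

The same trick obviously backfires if the “expectation” is allowed to drift too far from reality, which is the wireheading worry again in miniature.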
This theory predicts new wireheading failures at every new level of evolved organization. Each time evolution makes several attempts to build variations on a new kind of measuring-and-optimizing process/module/layer, some of those variants will use their control elements to manipulate their perception elements, and some will do poorly rather than well, with wireheading as a large and probably dysfunctional attractor.
“Human partisanship” does seem to be an example of often-broken agency in an evolutionarily recent context (i.e. the context of super-Dunbar, socially/verbally coordinated herds of meme-infected humans), and human partisanship does seem pretty bad… but as far as I can see, partisanship is not conceptually central here. It isn’t even the central negative thing.
The central negative thing, in my opinion, is wireheading.