A row of approximately 3,500 inner hair cells (IHCs) is situated along the basilar membrane, picking up the resonances generated by the incoming waves. The inner hair cells are spread out exponentially over the 3.4-centimetre length of the tube, with many more hair cells at the beginning (high frequencies) than at the end (low frequencies). Each inner hair cell picks up the vibrations of the membrane at a particular point, and is thus tuned to a particular frequency. The ‘highest’ hair cell is at 20 kHz, the ‘lowest’ at 20 Hz, with a very steep tuning curve at high frequencies, rejecting any frequency above 20 kHz.
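For concreteness, here is a minimal sketch of the kind of place-to-frequency mapping that passage describes, assuming a purely logarithmic spread of best frequencies from 20 kHz at the base to 20 Hz at the apex. The constants (3,500 cells, 3.4 cm, 20 Hz to 20 kHz) come from the passage; the functional form is an illustrative simplification, not real cochlear anatomy.

```python
import math

# Illustrative sketch of the tonotopic map described above: ~3,500 inner hair
# cells along a 3.4 cm membrane, with best frequencies spread logarithmically
# from 20 kHz at the base to 20 Hz at the apex. The exact form is an assumption.

N_CELLS = 3500
LENGTH_CM = 3.4
F_HIGH = 20_000.0  # best frequency at the base (0 cm)
F_LOW = 20.0       # best frequency at the apex (3.4 cm)

def best_frequency(position_cm: float) -> float:
    """Best frequency at a given distance from the base of the membrane."""
    frac = position_cm / LENGTH_CM            # 0 at the base, 1 at the apex
    return F_HIGH * (F_LOW / F_HIGH) ** frac  # log-spaced interpolation

def cell_index(frequency_hz: float) -> int:
    """Index of the hair cell (0 = base) tuned closest to a given frequency."""
    frac = math.log(F_HIGH / frequency_hz) / math.log(F_HIGH / F_LOW)
    return round(frac * (N_CELLS - 1))

print(best_frequency(0.0))   # 20000.0 Hz at the base
print(best_frequency(1.7))   # ~632 Hz halfway along
print(cell_index(1000.0))    # hair cell tuned near 1 kHz (~index 1517)
```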
When a hair cell vibrates due to incoming sound, it sends an action potential to the brain. So we have three types of sensors for seeing, but thousands for hearing.
But there are even more pixels in the eye. The difference is that these inputs have dimensional structure. A pixel in the center of your vision produces a very similar response to one a degree higher. A sound at 1000 Hz sounds similar to one at 1100 Hz.
And in fact the structure of the brain enforces this dimensionality. Nearby frequencies have overlapping representations. E.g. 1000 Hz might be represented as 00111000 and 1100 Hz as 00011100, where the 1s mark which inputs are active.
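Here is a toy version of such an overlapping population code, in the spirit of the bit patterns above: each tone activates a small window of adjacent units on a log-frequency axis, so 1000 Hz and 1100 Hz share most of their active units while distant frequencies share none. The population size and window width are arbitrary illustrative choices, not anything about the real auditory system.

```python
import math

# Toy overlapping population code: each pure tone activates a small contiguous
# window of units on a log-frequency axis. Nearby tones share most of their
# active units; distant tones share few or none.

N_UNITS = 64   # toy population, not the real ~3,500 hair cells
WINDOW = 5     # number of adjacent units a tone activates

def code(freq_hz: float, f_low: float = 20.0, f_high: float = 20_000.0) -> set:
    """Set of active unit indices for a pure tone."""
    frac = math.log(freq_hz / f_low) / math.log(f_high / f_low)
    center = round(frac * (N_UNITS - 1))
    return {i for i in range(center - WINDOW // 2, center + WINDOW // 2 + 1)
            if 0 <= i < N_UNITS}

def overlap(f1: float, f2: float) -> float:
    """Fraction of shared active units (1.0 = identical codes, 0.0 = disjoint)."""
    a, b = code(f1), code(f2)
    return len(a & b) / len(a | b)

print(overlap(1000, 1100))  # ~0.67: neighbouring tones share most units
print(overlap(1000, 4000))  # 0.0: distant tones share none
```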
But colors have no dimensionality. Red is qualitatively different than blue. They are different kinds of inputs.
I guess part of the problem is that there are only three color receptors, rather than thousands, so there is less reason to represent commonalities between them. That said, we do talk about “warm” and “cool” colors, which mainly seems to refer to how much blue is mixed into them. So that seems a bit like a one-dimensional “heat” scale, with blue on the cold end and red/green on the warm end?
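If you wanted to make that hand-wavy “heat” scale concrete, a crude scoring of RGB colors by how much red/green they contain relative to blue might look like the following. This is purely an illustration of the speculation above, not a model of actual color perception; the formula is made up.

```python
# Crude one-dimensional "warmth" score for an RGB color: how much red/green
# it contains relative to blue. Illustrative only; the formula is an assumption.

def warmth(r: int, g: int, b: int) -> float:
    """Roughly -1 (cool) to +1 (warm)."""
    return ((r + g) / 2 - b) / 255

print(warmth(255, 80, 0))     # orange -> ~0.66 (warm)
print(warmth(0, 80, 255))     # blue   -> ~-0.84 (cool)
print(warmth(128, 128, 128))  # grey   -> 0.0 (neutral)
```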
Actually, it’s almost the other way around.