Well, we could of course draw the analogy between colors of the spectrum and tones of sound.
Puzzle: We sense colors, which exist on a continuum, by how near a color is to each of the only 3 colors our retinas can sense directly, plus intensity. We sense tones, which also exist on a continuum, directly: we can sense each separate wavelength on its own. Yet we have the impression that there are more colors than sounds; we draw sounds on a line, but colors in a plane.
If you’re talking about only a single frequency of light or sound, a 2-dimensional point is enough to represent human perception—one dimension for frequency and another for intensity.
However, if you’re talking about the full range of colors and sounds that humans can distinguish, colors can be described with only 3 dimensions, while an ideal perceptual representation of sound would need a separate dimension for every functioning hair cell.
I figured it out. An ideal perceptual representation of sound would only need 2 hair cells if, like cones, they reported a distance from the stimulus. A cone cell gives a signal whose intensity indicates how far the wavelength of the light it sensed is from its preferred wavelength. 1 cone cell lets you order colors along a ray. 2 cone cells let you order them along a line. 3 cone cells let you order them on a plane.
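To make that concrete, here is a minimal numerical sketch, assuming Gaussian tuning curves with made-up peak wavelengths and a shared width for the three cone types (real photopigment curves are broader and asymmetric). It shows how three distance-coding receptors, once overall intensity is divided out, reduce a single wavelength to a point in a plane:

```python
import numpy as np

# Toy cone model: each cone's output falls off with the distance between the
# stimulus wavelength and the cone's preferred wavelength (Gaussian tuning).
# Peak wavelengths are rough placeholders for human S/M/L cones; the shared
# tuning width is an assumption, not a measured value.
CONE_PEAKS_NM = {"S": 445.0, "M": 535.0, "L": 565.0}
TUNING_WIDTH_NM = 60.0

def cone_responses(wavelength_nm, intensity=1.0):
    """Responses of the three cone types to a monochromatic light."""
    return {
        name: intensity * np.exp(-((wavelength_nm - peak) ** 2)
                                 / (2 * TUNING_WIDTH_NM ** 2))
        for name, peak in CONE_PEAKS_NM.items()
    }

def chromaticity(responses):
    """Divide out overall intensity; what is left is a point in a plane.

    Three responses minus one normalization constraint leaves 2 degrees of
    freedom, so a single wavelength lands at a 2-D 'color' coordinate even
    though the stimulus itself is just wavelength plus intensity.
    """
    total = sum(responses.values())
    return {name: r / total for name, r in responses.items()}

# The same wavelength at two different intensities maps to the same
# chromaticity point: distance coding separates hue from brightness.
print(chromaticity(cone_responses(500)))
print(chromaticity(cone_responses(500, intensity=10.0)))
```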
A hair cell is specific to a frequency, so you can’t combine the output from n hair cells to give an n-1 dimensional picture.
An ideal perceptual representation of sound would only need 2 hair cells if, like cones, they reported a distance from the stimulus.
That’s true if you’re talking about a stimulus that only contains a single frequency at a time, but real sounds and colors are mixtures of an entire spectrum of frequencies, each frequency having its own distinct amplitude.
For example, 2 hair cells, even if they had a wider frequency response, would not be enough to understand speech; for that you need at least 4 to 8 frequency bands.
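Here is a minimal sketch of that last point, assuming equal-width frequency bands and two invented two-tone test signals (actual speech-intelligibility work, as in channel-vocoder and cochlear-implant studies, uses more careful filter banks). With only 2 bands, two quite different sounds collapse onto the same summary; with 8 bands, their spectral shapes come apart:

```python
import numpy as np

def band_energies(signal, sample_rate, n_bands):
    """Summarize a sound by the energy in n_bands equal-width frequency bands.

    A crude stand-in for an array of hair cells: each band is specific to a
    frequency range, and the percept is the whole vector of band outputs,
    one dimension per band.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    edges = np.linspace(0, sample_rate / 2, n_bands + 1)
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])

# Two different two-tone mixtures (the frequencies are made up, only loosely
# evoking vowel-like spectra).
sr = 16000
t = np.arange(0, 0.05, 1.0 / sr)
sound_a = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 2300 * t)
sound_b = np.sin(2 * np.pi * 700 * t) + np.sin(2 * np.pi * 1100 * t)

# With only 2 bands the two sounds give identical summaries; with 8 bands
# they are clearly distinct.
for n in (2, 8):
    print(n, "bands:",
          np.round(band_energies(sound_a, sr, n)),
          np.round(band_energies(sound_b, sr, n)))
```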
There are more colors than tones, but there are more dimensions to sound than just tone.