> Not sure I see what problem the post is talking about. We perceive A as darker than B, but actually A and B have the same brightness (as measured by a brightness-measuring device). The brain doesn’t remove the shadow; it adjusts the perceived brightness of things in shadow.
Well, I’m saying “we perceive” here evokes a mental model where “we” (a homunculus, or more charitably, a part of our brain) get a corrected image. But I don’t think this is what is happening. Instead, “we” get a more sophisticated data-stream which interprets the image.
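To make “a data-stream which interprets the image” a bit more concrete, here is a toy sketch (my own illustration, not a claim about what visual cortex actually does) of one interpretive stage: a crude Retinex-style computation that reports lightness as luminance relative to a local average. Two squares with identical measured luminance come out different once the surrounding context (the shadow) is factored in, which is exactly the A-vs-B situation in the quoted objection.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def retinex_lightness(luminance: np.ndarray, window: int = 31) -> np.ndarray:
    """Crude Retinex-style stage: lightness as the ratio of each pixel's
    luminance to a local average. Patches with identical luminance diverge
    once their surroundings (e.g. a shadow) differ."""
    local_mean = uniform_filter(luminance, size=window)
    return luminance / (local_mean + 1e-8)

# Toy scene: a bright field with a "shadow" over the bottom half.
# Both marked squares have the same measured luminance (0.5).
scene = np.full((100, 100), 0.9)
scene[50:, :] *= 0.4          # cast a shadow over the bottom half
scene[20:30, 20:30] = 0.5     # square A, in full light
scene[70:80, 20:30] = 0.5     # square B, in shadow

lightness = retinex_lightness(scene)
print(lightness[25, 25], lightness[75, 25])  # A comes out darker than B
```

Nothing here is meant as neuroscience; the point is only that “same measured brightness, different output” falls out of even one simple context-sensitive stage, with no corrected image handed to anyone.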
With color perception, we typically settle for a simple story where rods and cones translate light into a specific color space. But the real story is far more complicated. The brain translates things into many different color spaces, at different stages of processing. At some point it probably doesn’t make sense to speak in terms of color spaces any more (when we’re knee-deep in higher-level features).
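To give a flavor of the “many color spaces” idea, here is a sketch using standard colorimetric transforms: sRGB gamma decoding, the standard sRGB-to-XYZ matrix, and the Hunt-Pointer-Estevez XYZ-to-LMS matrix for approximate cone responses. The matrices are textbook values; the analogy to cortical processing stages is mine, and loose. The point is just that “the same color” passes through several successive coordinate systems, each suited to a different job.

```python
import numpy as np

# Standard sRGB -> CIE XYZ matrix (linear RGB, D65 white point).
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

# Hunt-Pointer-Estevez XYZ -> LMS matrix (approximate cone responses).
XYZ_TO_LMS = np.array([
    [ 0.38971, 0.68898, -0.07868],
    [-0.22981, 1.18340,  0.04641],
    [ 0.0,     0.0,      1.0    ],
])

def srgb_to_linear(c: np.ndarray) -> np.ndarray:
    """Undo sRGB gamma: one coordinate system to the next."""
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# The same stimulus, expressed in successive coordinate systems.
srgb = np.array([0.5, 0.5, 0.5])   # mid grey as stored in an image file
linear = srgb_to_linear(srgb)      # roughly physical light intensities
xyz = SRGB_TO_XYZ @ linear         # device-independent colorimetry
lms = XYZ_TO_LMS @ xyz             # rough cone-response coordinates

for name, v in [("sRGB", srgb), ("linear", linear), ("XYZ", xyz), ("LMS", lms)]:
    print(f"{name:>6}: {np.round(v, 4)}")
```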
The challenge is to speak about this in a sensible way without saying wrong-headed things. We don’t want to just reduce to what’s physically going on; we also want to recover whatever is worth recovering of our talk of “experiencing”.
Another example: perspective warps 3D space into 2D space in a particular way. But actually, the retina is curved, which changes the perspective mapping somewhat. Naively, we might think this will change which lines we perceive to be straight near the periphery of our vision. But should it? We have lived all our lives with a curved retina. Should we not have learned what lines are straight through experience? I’m not predicting positively that we will be correct about what’s curved/straight in our peripheral vision; I’m trying to point out that getting it wrong in the way the naive math of curved retinas suggests would require some very specific wrong machinery in the brain, such that it’s not obvious evolution would put it there.
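Here is the toy geometry behind the curved-retina point (pure pinhole-camera math; the azimuth/elevation parameterization of the “retina” is an arbitrary choice of mine, and real retinal coordinates are another matter). Points on a straight line in the world stay collinear when projected onto a flat image plane, but not when projected onto a sphere and read off in angular coordinates:

```python
import numpy as np

# Points along a straight line in 3D: one unit above the optical axis,
# three units ahead, sweeping from left to right.
t = np.linspace(-2.0, 2.0, 5)
points = np.stack([t, np.full_like(t, 1.0), np.full_like(t, 3.0)], axis=1)

# Pinhole projection onto a flat image plane at z = 1: (x/z, y/z).
flat = points[:, :2] / points[:, 2:3]

# "Spherical retina": normalize to the unit sphere, then read off
# azimuth/elevation angles as 2D retinal coordinates.
dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
spherical = np.stack([np.arctan2(dirs[:, 0], dirs[:, 2]),
                      np.arcsin(dirs[:, 1])], axis=1)

def collinear(pts: np.ndarray) -> bool:
    """True if all 2D points lie on one straight line."""
    d = pts - pts[0]
    cross = d[1:, 0] * d[-1, 1] - d[1:, 1] * d[-1, 0]  # 2D cross products
    return bool(np.allclose(cross, 0.0, atol=1e-9))

print("flat image collinear?     ", collinear(flat))       # True
print("spherical image collinear?", collinear(spherical))  # False
```

Of course, this only shows that the raw projection differs; whether perception actually errs in that direction depends on what the rest of the pipeline does with it, which is exactly the question above.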
Moreover, like how we transform through many different color spaces in visual processing, we might transform through many different space-spaces too (if that makes sense!). Just because an image is projected on the retina in a slightly wonky way doesn’t mean that’s final.
I’m trying to point in the direction of thinking about this kind of thing in a non-confused way.