Well said (including your later comment about color constancy). Along the same lines, this is why cameras often show objects in shadows as blacked out—because that’s the actual image they’re getting, and the image your own retinas get! It’s just that your brain has cleverly subtracted out the impact of the shadow before presenting it to you, so you can still see significant contrast and colors in the shadowed objects.
That doesn’t explain why faithful reproductions of images with shadows don’t prompt the same reinterpretation by your brain.
Blacked-out shadows are generally an indication of a failure to generate a ‘faithful’ reproduction, due to dynamic-range limitations of the camera and/or display medium. There is a fair amount of research into how to work around these limitations through tone mapping. High-dynamic-range cameras and displays are also an area of active research. There’s not really anything to explain here beyond the fact that we currently lack the capture or display capability to faithfully reproduce such scenes.
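To give a concrete sense of how tone mapping works around the dynamic-range limit, here is a minimal sketch using the Reinhard global operator L/(1+L). The luminance values are invented for illustration; real pipelines operate on full images and more elaborate curves:

```python
def reinhard(l):
    """Compress linear luminance via L / (1 + L), a simple global tone-mapping curve."""
    return l / (1.0 + l)

# Hypothetical scene: a sunlit patch ~1000x brighter than a shadowed one
# (linear luminance, arbitrary units).
shadow, sun = 0.05, 50.0

# Naive reproduction: scale so the brightest value fits, then quantize to 8 bits.
scale = 255 / sun
print(round(shadow * scale), round(sun * scale))  # -> 0 255  (shadow blacked out)

# Tone-mapped reproduction: the shadow survives the same 8-bit quantization.
scale = 255 / reinhard(sun)
print(round(reinhard(shadow) * scale), round(reinhard(sun) * scale))  # -> 12 255
```

The point is just that the compressive curve spends more of the limited output range on dark values, so shadow detail that would round to zero under linear scaling remains visible.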
Sure it does—faithful reproductions give the shadowed portion colors appropriate for matching how your brain would perceive the shadowed portion of a real-life scene.
Umm, that’s not what I meant by “faithful reproductions”, and I have a hard time understanding how you could have misunderstood me. Say you took a photograph using the exact visual input over some 70 square degrees of your visual field, and then compared the photograph to that same view, trying to control for all the relevant variables*. You seem to be saying that the photograph would show the shadows as darker, but I don’t see how that’s possible. I am familiar with the phenomenon, but I’m not sure where I go wrong in my thought experiment.
* photo correctly lit, held so that it subtends 70 square degrees of your visual field, with your head in the same place as the camera was, etc.
I thought you meant “faithful” in the sense of “seeing this is like seeing the real thing”, not “seeing this is learning what your retinas actually get”. If you show a photograph that shows exactly what hit the film (no filters or processing), then dark portions stay dark.
When you see the scene in real life, you subtract off the average coloring, which can be deceiving. When you see the photo, you see it as a photo, and you use your current real-life background and lighting to determine the average color of your visual field. The darkness in the photo deviates significantly from this, while it does not so deviate when you’re immersed in the actual scene and have enough information about the shadow for your brain to subtract off the excessive blackness.
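This “subtracting off the average coloring” is roughly what gray-world color-constancy algorithms do: assume the scene’s average reflectance is gray, estimate the illuminant as the per-channel mean, and divide it out. A minimal sketch (the pixel values are invented for illustration):

```python
def gray_world(pixels):
    """Gray-world correction: divide each channel by its mean, rescaled so
    the overall brightness is preserved. pixels is a list of (r, g, b) tuples
    in linear RGB."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    avg = sum(means) / 3
    return [tuple(p[c] * avg / means[c] for c in range(3)) for p in pixels]

# Two gray surfaces under a bluish "shadow" cast (blue channel boosted everywhere).
shaded = [(0.2, 0.2, 0.4), (0.4, 0.4, 0.8)]
for p in gray_world(shaded):
    print(tuple(round(c, 2) for c in p))  # both come out neutral gray
```

After correction both patches are achromatic, which is the rough analogue of your brain recovering the surfaces’ colors despite the cast. The failure mode matches the photo case too: when the shadow occupies only a small part of the visual field, the global mean is dominated by the unshaded surround, and the shadowed region stays dark.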
Been a long day, hope I’m making sense.