Sure it does: faithful reproductions give the shadowed portion the colors needed to match how your brain would perceive the shadowed portion of a real-life scene.
Umm, that’s not what I meant by “faithful reproductions”, and I have a hard time understanding how you could have misunderstood me. Say you took a photograph using the exact visual input over some 70 square degrees of your visual field, and then compared the photograph to that same view, trying to control for all the relevant variables*. You seem to be saying that the photograph would show the shadows as darker, but I don’t see how that’s possible. I am familiar with the phenomenon, but I’m not sure where I go wrong in my thought experiment.
* photo correctly lit, held so that it subtends 70 square degrees of your visual field, with your head in the same place as the camera was, etc.
I thought you meant “faithful” in the sense of “seeing this is like seeing the real thing”, not “seeing this is learning what your retinas actually get”. If you display a photograph that shows exactly what hit the film (no filters or processing), then the dark portions stay dark.
When you see the scene in real life, you subtract off the average coloring that can be deceiving. When you see the photo, you see it as a photo, and you use your current real-life background and lighting to determine the average color of your visual field. The darkness in the photo deviates significantly from that average, whereas it does not when you’re immersed in the actual scene and your brain has enough information about the shadow to subtract off the excess blackness.
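The “subtract off the average” idea can be made concrete with a toy calculation (my own illustration, not a real model of vision; the function name and numbers are made up). Two patches send the *same* raw luminance to the retina, but because one sits in a dark shadowed surround and the other in a bright surround, a surround-discounting rule assigns them different perceived lightnesses:

```python
# Crude stand-in for lightness constancy: the visual system "discounts
# the illuminant" by comparing a patch to its local surround. This is a
# hypothetical sketch, not an actual perceptual model.

def perceived(patch: float, surround_avg: float) -> float:
    # perceived lightness ~ patch luminance minus the local average
    return patch - surround_avg

raw = 0.5                                      # identical luminance at the retina
in_shadow = perceived(raw, surround_avg=0.25)  # dark surround (shadowed region)
in_light = perceived(raw, surround_avg=0.75)   # bright surround (lit region)

print(in_shadow, in_light)  # 0.25 -0.25: same input, opposite percepts
```

This is the viewing-context point in miniature: hold the photo up in a normally lit room and its dark patch is judged against the room’s bright average, so it looks dark; stand inside the scene and the same patch is judged against the shadow’s own surround, so the darkness gets subtracted away.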
Been a long day, hope I’m making sense.