I don’t think your second footnote sufficiently addresses the large variance in 3D visualization abilities (note that I do say visualization, which includes seeing a 2D video in your mind of a 3D object and manipulating it smoothly), and overall I’m not sure what you’re getting at if you don’t ground your post in specific predictions about what you expect people can and cannot do thanks to their ability to visualize 3D.
You might be ~conceptually right that our eyes see “2D” and add depth, but *um ackshually*, two eyes each receiving 2D data means you’ve received 4D input (by ML conventions, you’ve got 4 input dimensions per time step, 5 overall in your tensor). That input is very redundant, and the redundancy mostly allows you to extract depth with a local algorithm, which in turn lets you build a 3D map in your mental representation. I don’t get why you claim at the end that we don’t have a 3D map.
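To make that dimension count concrete (this is my reading of the count, not necessarily the one the post or comment intends): if each eye’s input per time step is a height × width × color image, then per time step the tensor has 4 axes, and stacking time steps gives 5. The depth-from-disparity step can be sketched with the standard pinhole stereo relation; the focal length, baseline, and disparity values below are purely illustrative.

```python
import numpy as np

# One way to read "4 input dimensions per time step, 5 overall":
# per time step the input is (eye, height, width, color) = 4 axes,
# and adding a time axis gives a 5-axis tensor.
T, H, W, C = 10, 480, 640, 3
stream = np.zeros((T, 2, H, W, C))
print(stream[0].ndim)  # 4 axes per time step
print(stream.ndim)     # 5 axes overall

# Extracting depth from binocular disparity is a local computation:
# in the pinhole stereo model, depth = focal_length * baseline / disparity
# for each matched pixel. Illustrative values only:
focal_px, baseline_m, disparity_px = 500.0, 0.06, 5.0
depth_m = focal_px * baseline_m / disparity_px
print(depth_m)
```

The point of the sketch is just that depth falls out of a per-pixel computation on the redundant stereo pair, after which a 3D map is a natural representation.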
Back to concrete predictions: are there things you expect a strong human visualizer couldn’t do? To give intuition, I’d say a strong visualizer has at least the visualizing, modifying, and measuring capabilities of SolidWorks/Blender in their mind. Tell one to visualize a 3D object they know, and they can tell you anything about it.
It seems to me the most important thing you noticed is that in real life we rarely see past the surfaces of things (because the spectrum of light we see doesn’t penetrate most materials), and thus most people don’t know the insides of 3D objects very well; but that can be explained by lack of exposure rather than an inability to understand 3D.
Fwiw, looking at the spheres I guessed a volume ratio of approximately 2.5. I’m curious: if you visualized yourself picking up these two spheres one after the other, imagining them made of a dense metal, could you feel that one is 2.3 times heavier than the other?
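As a quick sanity check on why this is hard (my own arithmetic, taking the 2.3× figure quoted above at face value): sphere volume scales with the cube of the radius, so a 2.3× difference in volume or weight corresponds to only a ~1.32× difference in radius.

```python
# Sphere volume scales as radius**3, so a given volume (or weight) ratio
# corresponds to the cube root of that ratio in apparent linear size.
volume_ratio = 2.3                       # figure quoted in the discussion
radius_ratio = volume_ratio ** (1 / 3)
print(round(radius_ratio, 2))            # only ~1.32x larger in radius
```

A ~32% difference in radius is visually subtle, which is part of why eyeballing volume ratios is so error-prone.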
I also guessed the ratio of the spheres was between 2 and 3 (and clearly larger than 2) by imagining their weight.
I was following along with the post’s claim that we mostly think in terms of surfaces until the orange example. Having peeled many oranges and separated them into sections, I find them easy to imagine in 3D, and I have only a weak “mind’s eye” and moderate 3D spatial reasoning ability.
I find your first point particularly interesting: I always thought weights are quite hard to estimate and intuit. Of course it’s doable to roughly assess whether one could, say, carry an object. But when somebody shows me a random object and I’m supposed to guess its weight, I’m easily off by a factor of 2+, which is very different from, e.g., distances (and rather in line with areas and volumes).
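The “in line with areas and volumes” point can be made quantitative (my own illustration, assuming weight tracks volume, i.e. length cubed, at fixed density): a modest misjudgment of linear size compounds into a large weight error.

```python
# If perceived weight scales with volume (length**3) at fixed density,
# then a 26% error in judging each linear dimension compounds to
# roughly a factor-of-2 error in the weight estimate.
linear_error = 1.26
weight_error = linear_error ** 3
print(round(weight_error, 2))  # ~2x off in weight
```

So being off by a factor of 2+ on weight is consistent with quite good, everyday-level accuracy on linear size.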