I think there’s a lot of research showing that we’re fairly bad at predicting how other people see the world, and how much detail there is in their heads. I’ve read quite a few books describing people who presume that others are speaking metaphorically when they talk about imagining scenes in color or in 3-d, or about “hearing” a piece of music in their minds. Those who can do these things often assume that those who can’t simply aren’t trying. Face-blindness went unrecognized for quite a while.
Some people are much better at doing mental rotations of 3-d objects than others. I can do a decent mental image of the inside of an orange, an apple, a persimmon, or a pear, but perhaps I’ve spent more time cutting up fruit than others. My mental image of the internal shape of the branches in our persimmon tree is pretty detailed, since I’ve been climbing inside to pick fruit and trim for 35 years.
google.com/images?q=mental+rotation+three-dimensional+objects
There was a time, when I was working on n-dimensional data structures, that I could cleanly think in 4- or 5-dimensional “images”. They weren’t quite visual, since vision is so 2-d, but I could manipulate the features along each dimension independently.
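There is a tidy mathematical counterpart to manipulating dimensions independently, which may or may not match what the commenter had in mind: a rotation in n dimensions decomposes into 2-d plane rotations, and rotations in disjoint planes commute and leave each other’s coordinates untouched. A minimal NumPy sketch (all names here are my own illustration):

```python
import numpy as np

def plane_rotation(dim, i, j, theta):
    """Identity matrix with a 2-d rotation embedded in the (i, j) plane."""
    R = np.eye(dim)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i], R[i, j] = c, -s
    R[j, i], R[j, j] = s, c
    return R

# In 4-d, a rotation in the (0, 1) plane and one in the (2, 3) plane
# are fully independent: they commute, and each leaves the other
# plane's coordinates unchanged.
R01 = plane_rotation(4, 0, 1, np.pi / 2)   # quarter turn in dims 0-1
R23 = plane_rotation(4, 2, 3, np.pi / 4)   # eighth turn in dims 2-3

p = np.array([1.0, 0.0, 1.0, 0.0])
q = R01 @ R23 @ p   # order doesn't matter for disjoint planes
```

Applying `R01` alone moves only the first two coordinates of `p`, which is one concrete sense in which a 4-d “image” can be manipulated one pair of dimensions at a time.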
When looking at full-color stereograms, you have to build a mental model of the scene’s depth to make sense of the image, even though the rendering consists entirely of surfaces.
This remark is really interesting. It seems related to the brain rewiring that happens after, say, a subject has been blindfolded for a week: their hearing and tactile discrimination improve a lot to compensate.
Blink. Were there any significant downsides? And did the improvements persist, or diminish over time?