Self-Anchoring
Sometime between the ages of 3 and 4, a human child becomes able, for the first time, to model other minds as having different beliefs. The child sees a box, sees candy in the box, and sees that Sally sees the box. Sally leaves, and then the experimenter, in front of the child, replaces the candy with pencils and closes the box so that the inside is not visible. Sally returns, and the child is asked what Sally thinks is in the box. Children younger than 3 say “pencils”; children older than 4 say “candy”.
Our ability to visualize other minds is imperfect. Neural circuitry is not as flexible as a program fed to a general-purpose computer. An AI, with fast read-write access to its own memory, might be able to create a distinct, simulated visual cortex to imagine what a human “sees”. We humans only have one visual cortex, and if we want to imagine what someone else is seeing, we’ve got to simulate it using our own visual cortex—put our own brains into the other mind’s shoes. And because you can’t reconfigure memory to simulate a new brain from scratch, pieces of you leak into your visualization of the Other.
The diagram above is from Keysar, Barr, Balin, & Brauner (2000). The experimental subject, the “addressee”, sat in front of an array of objects, with the view shown on the left. On the other side, across from the addressee, sat the “director”, whose view is shown on the right. The addressee had an unobstructed view, which also allowed the addressee to see which objects were not visible to the director.
The experiment used the eye-tracking method: the direction of a subject’s gaze can be measured using computer vision. Tanenhaus et al. (1995) had previously demonstrated that when people understand a spoken reference, their gaze fixates on the identified object almost immediately.
The key test was when the director said “Put the small candle next to the truck.” As the addressee can clearly observe, the director only knows about two candles, the largest and medium ones; the smallest candle is occluded.
And, lo and behold, subjects’ eyes fixated on the occluded smallest candle an average of 1,487 milliseconds before they correctly identified the medium-sized candle as the one the director must have meant.
This seems to suggest that subjects first computed the meaning using their own knowledge—their brains’ default settings—and only afterward adjusted for the other mind’s different knowledge.
Numerous experiments suggest that where there is adjustment, there is usually under-adjustment, which leads to anchoring. In this case, “self-anchoring”.
Barr (2003) argues that the processes are actually more akin to contamination and under-correction; we can’t stop ourselves from leaking over, and then we can’t correct for the leakage. Different process, same outcome:
We can put our feet in other minds’ shoes, but we keep our own socks on.
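The anchoring-and-under-adjustment account can be sketched as a toy model. Everything here is a hypothetical illustration (the `adjustment_rate` parameter and all numbers are invented, not fitted to the experimental data): the estimate of the other mind starts from one’s own view and moves only partway toward what the other person can actually see.

```python
# Toy sketch of self-anchoring as anchoring-and-(under-)adjustment.
# All quantities are hypothetical illustrations, not fitted data.

def estimate_other_view(own_view: float, others_evidence: float,
                        adjustment_rate: float = 0.6) -> float:
    """Start from one's own view (the anchor) and adjust only
    partway toward what the other person's evidence supports.
    An adjustment_rate < 1.0 models under-adjustment."""
    return own_view + adjustment_rate * (others_evidence - own_view)

# The addressee sees three candles (own_view = 3); the director's
# occluded view supports only two (others_evidence = 2).
biased = estimate_other_view(3.0, 2.0)         # 2.4: still leans toward self
perfect = estimate_other_view(3.0, 2.0, 1.0)   # 2.0: full correction
```

On Barr’s contamination-and-under-correction account the mechanism differs—the self leaks in first and the correction comes after—but in this toy formulation both stories land on the same biased estimate: a prediction of the other mind that is still partly a prediction of oneself.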
Barr, D. J. (2003). Listeners are mentally contaminated. Poster presented at the 44th annual meeting of the Psychonomic Society, Vancouver.
Keysar, B., Barr, D. J., Balin, J. A., & Brauner, J. S. (2000). Taking perspective in conversation: The role of mutual knowledge in comprehension. Psychological Science, 11, 32–38.
Perner, J., Leekam, S. R., & Wimmer, H. (1987). Three-year-olds’ difficulty with false belief: The case for a conceptual deficit. British Journal of Developmental Psychology, 5(2), 125–137.
Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632–1634.