My understanding of people on the autism spectrum is that they typically lack ordinary ToM
The link says “high-functioning adults with ASD…can easily pass the false belief task when explicitly asked to”. So there you go! Perfectly good ToM, right?
The paper also says they “do not show spontaneous false belief attribution”. But if you look at Figure 3, they “fail” the test by looking equally at the incorrect window and the correct window, not by looking disproportionately at the incorrect window. So I would suggest that the most likely explanation is not that the ASD adults are screwing up the ToM task, but rather that they’re taking no interest in the ToM task! Remember, the subjects were never asked to pay any attention to the person! Maybe they just didn’t! So I say this is a case of motivation, not capability.

Maybe they were sitting there during the test, thinking to themselves “Gee, that’s a neat diorama, I wonder how the experimenters glued it together!” :-P That would also be consistent with the eye-tracking results mentioned in the book excerpt here. (I also recall a Temple Grandin anecdote (I can’t immediately find it) about getting fMRI’d: she said she basically ignored the movie she was nominally supposed to be watching, because she was so interested in some aspect of how the scientists had set up the experiment.)

Anyway, the paper you link doesn’t report (AFAICT) what fraction of the time the subjects are looking at neither window—they effectively just throw those trials away I think—which to me seems like discarding the most interesting data!
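(To make that last complaint concrete, the quantity I wish they’d reported is something like the sketch below. All the numbers and field names are invented by me for illustration; the paper’s actual data format is surely different.)

```python
# Toy sketch (all numbers and field names made up): per-trial looking
# times in ms at each region of interest during the anticipation window.
trials = [
    {"correct_window": 820, "incorrect_window": 790, "neither": 2390},
    {"correct_window": 0,   "incorrect_window": 0,   "neither": 4000},
    {"correct_window": 310, "incorrect_window": 275, "neither": 3415},
]

def neither_fraction(trial):
    """Fraction of the anticipation window spent looking at neither
    window -- the number that (AFAICT) gets thrown away."""
    total = sum(trial.values())
    return trial["neither"] / total if total else 0.0

for i, t in enumerate(trials, 1):
    print(f"trial {i}: {neither_fraction(t):.0%} looking at neither window")
```

If that fraction turns out to be large, then “looking equally at both windows” reads as “not engaging with the task”, not “failing it”.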
If it is true that (1) autistic people use mechanisms other than ToM/IRL to understand people (i.e., modeling people like car engines)
I think you misunderstood me here. I’m suggesting that maybe:
ToM ≈ IRL ≈ building a good generative model that explains observations of humans
“understanding car engines” ≈ building a good generative model that explains observations of car engines.
I guess you’re assuming that a good generative model of a mind must contain special ingredients that a good generative model of a car engine does not need? I don’t currently think that. Well, more specifically, I think “the particular general-purpose toolkit that a human brain uses for building generative models” is sufficient for both modeling minds and modeling car engines. (I can imagine other generative-model-building toolkits that are not.) For example, the thought “Sally believes the sky is green” seems to me to be of similar construction to the thought “The engine is not painted green, but if it were, the paint would quickly rub off and contaminate the engine fluid”. Both kinda involve invoking and manipulating a counterfactual world and relating it to the real world. I could be wrong, but anyway that’s what I meant.
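If a toy example helps, here’s the kind of equivalence I have in mind. Everything below is my own made-up illustration (the models, the numbers, the crude enumerate-and-score inference), but notice that the same generic `infer` routine works unchanged whether the latent variable is “Sally’s belief” or “a hidden engine fault”:

```python
def infer(latents, likelihood, observation):
    """Generic inference over a generative model: weight each candidate
    latent state by how well it predicts the observation, then normalize.
    A crude stand-in for the brain's general-purpose model-building toolkit."""
    scores = {z: likelihood(observation, z) for z in latents}
    total = sum(scores.values())
    return {z: round(s / total, 3) for z, s in scores.items()}

# ToM / IRL flavor: latent = Sally's belief about where the ball is;
# observation = where she actually looks.
def looks_given_belief(looked_at, belief):
    return 0.9 if looked_at == belief else 0.1

print(infer(latents=["basket", "box"],
            likelihood=looks_given_belief,
            observation="basket"))
# -> {'basket': 0.9, 'box': 0.1}: she probably believes it's in the basket

# Car-engine flavor: latent = hidden engine fault; observation = symptom.
def symptom_given_fault(symptom, fault):
    table = {("wont_start", "dead_battery"): 0.8,
             ("wont_start", "empty_tank"):   0.6,
             ("wont_start", "healthy"):      0.01}
    return table[(symptom, fault)]

print(infer(latents=["dead_battery", "empty_tank", "healthy"],
            likelihood=symptom_given_fault,
            observation="wont_start"))
# -> dead_battery is the most probable hidden state
```

On this picture, a false belief is just a latent state that happens to mismatch the actual world; nothing mind-specific is needed in the inference machinery itself.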