Because it doesn’t have a specificity of 1, or for another reason? Even if we are judging the test on just this data point, it seems to me the author has difficulty manipulating their mental model of others’ minds, to the point where their inability to pass the Sally-Anne tests is informative.
Because it works better as a test of language ability. People can model others’ minds but get lost in the many clauses. (Higher levels increase difficulty by increasing the number of clauses and the difficulty of the phrasing.)
This is common knowledge in the autistic community, but I hardly blame you for not knowing it. Most people don’t, unless all their friends are autistic or Aspies.
Because it works better as a test of language ability. People can model others’ minds but get lost in the many clauses.
That sounds plausible, but how would you determine that it’s language ability specifically that’s causing the issue? Do they pass the basic Sally-Anne test (“Where will Sally look for her marble?”) but fail more complicated versions? Do they pass clear versions but not wordy versions (passing “Where will Sally look for her marble?” but failing “Where would Sally tell us she believes the marble is?”)?
It seems to me that language failure could be a consequence of theory of mind failure. For example, if I’m trying to trick someone trying to trick me, what I think he thinks I think he thinks is an unconsciously constructed object in my mental model. Describing it is a little tricky because there’s not a single word for it, but making predictions and planning actions based on it is not difficult. If someone doesn’t have that in their mental model, but has to construct it, then it seems to me the first place they’ll notice difficulty is parsing the question: they look inside for something with the tag “what I think he thinks I think he thinks,” find nothing, and conclude they probably didn’t hear or parse it correctly. But this is speculation by a non-psychologist, and so evidence-driven opinions are more welcome.
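The kind of bookkeeping that the mental-model object does can be sketched in code. Here is a toy Python simulation of the basic Sally-Anne setup (entirely my own illustration, with invented names and structure, not anything from the test literature): each agent’s belief about the marble’s location is simply the last move they witnessed.

```python
# Toy Sally-Anne simulation: an agent's belief about the marble's location
# is the last move that agent witnessed. (Hypothetical sketch, not a model
# from the psychology literature.)

def run(events):
    """events: (actor, new_location, observers) tuples, in order."""
    beliefs = {}
    for actor, location, observers in events:
        for agent in observers:        # only witnesses update their belief
            beliefs[agent] = location
    return beliefs

events = [
    ("Sally", "basket", {"Sally", "Anne"}),  # Sally puts the marble in the basket
    ("Anne",  "box",    {"Anne"}),           # Anne moves it while Sally is out
]
beliefs = run(events)
print(beliefs["Sally"])  # where Sally will look -> basket
print(beliefs["Anne"])   # where the marble actually is -> box
```

Answering the first-order question (“Where will Sally look?”) is just a lookup in this structure; the work is in maintaining it as events unfold.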
Yes, people who pass the basic test fail on tests with more complicated language structures. (Hint: when your test is so poorly designed that you can’t even be certain whether you’re testing a concept that exists, and even given that you are, whether your test is testing it, the test is crap even if it’s not biased against the people you purport to study.)
You have normal theory of mind, I would assume. This includes recursive theory of mind (I know you know I know). Do you think that you could still be confused by “Sally thinks that Harry thinks that Sally thinks that I think that Sally thinks that whatever” or something similar?
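To make the distinction between the belief object and the sentence concrete: an nth-order belief is a trivially simple recursive structure, even when the English sentence describing it is a mouthful. A toy sketch (my own hypothetical representation, not a standard formalism from the ToM literature):

```python
# An nth-order belief statement is just a right-nested chain of agents
# wrapped around a proposition. The structure is trivial to build and
# inspect programmatically, even when the equivalent English sentence
# is hard to parse. (Hypothetical illustration only.)

def nest(agents, proposition):
    """Wrap a proposition in a chain of believers, outermost agent first."""
    belief = proposition
    for agent in reversed(agents):
        belief = (agent, belief)
    return belief

def order(belief):
    """Nesting depth of the chain = the 'order' of the question."""
    depth = 0
    while isinstance(belief, tuple):
        depth += 1
        belief = belief[1]
    return depth

# "Sally thinks that Harry thinks that Sally thinks that I think that ..."
b = nest(["Sally", "Harry", "Sally", "I"], "the marble is in the box")
print(order(b))  # -> 4: a fourth-order belief
```

The difficulty of the verbal test grows with the length and center-embedding of the sentence, while the depth of the underlying structure grows by only one per clause, which is the asymmetry at issue in this thread.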
when your test is so poorly designed that you can’t even be certain whether you’re testing a concept that exists, and even given that you are, whether your test is testing it, the test is crap even if it’s not biased against the people you purport to study.
It’s not my test, and I can’t comment on the certainty of people who devised it / administer it today, whose opinions I suspect are more informed than mine.
Do you think that you could still be confused by “Sally thinks that Harry thinks that Sally thinks that I think that Sally thinks that whatever” or something similar?
If spoken too quickly, sure. If the test were written (or spoken slowly), I think I would give the right answer to 6th-order questions at least 90% of the time.
The paper they link to (here) doesn’t seem to be as strong as they present it in the post. I certainly agree that Baron-Cohen’s claim that ToM can’t be learned sounds wrong, unless he’s arguing about brain structure rather than performance (that is, they can learn how to answer the questions correctly but never as easily as a neurotypical).
I also followed the citation trail to come across this paper, which included picture-based tests. An example: a green apple was placed in front of the subject, who was given a green marker that (unbeknownst to them) wrote in red ink. They drew the apple somewhere they couldn’t see the result; the researcher then put an identical red apple next to the green apple, showed them their (red) drawing, and asked “Which of these apples were you trying to draw?” and “When X enters the room, which apple will they think you were trying to draw?”
They tested normal 4-year-olds and deaf or autistic children (ages 5 to 13, average age 9) on the false-drawing task and a standard false-belief task (what’s in the box? Not what’s on the label! What will X think is in the box? What did you think was in the box before I opened it?). The normal children mostly passed the standard test and mostly failed the false-drawing task; the deaf or autistic children mostly failed the standard test and mostly passed the false-drawing task. (Normal children of average age 9 were not tested; I presume they would mostly pass both tests.)
I now have a much better idea of what a non-verbal false-belief test would look like, but I still think both varieties of test are useful for identifying ToM delays or deficiencies. That the normal 4-year-olds do poorly on the pictorial false-belief test suggests to me that it, too, is not testing just ToM but something else as well.
http://www.staff.ncl.ac.uk/daniel.nettle/liddlenettle.pdf Feel free to draw your own conclusions from an actual study using a theory of mind test. Here’s a critique of it: http://www.wrongplanet.net/postp3314609.html#3314609
This is a critique of the test in its usual usage, which isn’t for neurologically normal three-year-olds.