I’m not clear on what timeframe they think the ‘modules’ evolved in, but they argue for a bunch of high-level ones related to cognition, as described here:
http://www.ling.upenn.edu/courses/hum100/evolutionary_psychology.html
The argument presented in these course notes is that we have some sort of ‘social contracts’ module, evolved at some unspecified but apparently rather recent point, that helps us solve the Wason selection task (when we read it off paper, mind you, in the English language) better when it mentions social contracts than when it is about letters and numbers. Apparently we have some domain-limited intelligence that does the Wason selection task correctly, using logic, when it is presented as a social contract (in 75% of people), but sleeps in 75% of people when it is presented as letters and numbers. This is pretty ridiculous. Failure on the Wason selection task is a pretty ordinary language-processing bug: you carelessly misinterpret the problem as instructions to execute (flip the cards with even numbers, see if they have red on the back) rather than as a proposition to be tested. When you word it as something concrete, be it a social contract or the testing of medications, people actually think beyond the careless misreading. In natural language, ‘if A then B’ is very often used when one means ‘iff A then B’.
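To make the if/iff point concrete, here is a minimal sketch (my own illustration, not from the course notes) using the classic four-card setup: cards showing 3, 8, red, and brown, with the rule “if a card shows an even number, its other face is red”. Reading the rule as ‘if’ versus ‘iff’ demands different cards be flipped:

```python
# Which cards can falsify the rule "if even then red"?
# Only a card that could hide a counterexample needs flipping.

def must_flip_if(card):
    """Under 'if even then red': flip even numbers (back might
    not be red) and non-red colours (back might be even)."""
    kind, value = card
    if kind == "number":
        return value % 2 == 0
    return value != "red"

def must_flip_iff(card):
    """Under the common misreading 'even iff red', every card
    can falsify the rule, so every card must be flipped."""
    return True

cards = [("number", 3), ("number", 8),
         ("colour", "red"), ("colour", "brown")]

print([c for c in cards if must_flip_if(c)])
# -> [('number', 8), ('colour', 'brown')]
print([c for c in cards if must_flip_iff(c)])
# -> all four cards
```

The typical wrong answer (flip the 8 and the red card) matches neither reading exactly, but it is what you get if you treat the rule as instructions about the cards it mentions rather than as a proposition to test.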
edit: that is to say, the field badly suffers from the lack of a ‘probability of evolving’ prior, akin to the Occam’s razor prior used elsewhere. The difference is more than adequately explained by a zillion causes that have nothing to do with evolution and everything to do with education and culture. It’s a huge give-away that you can train people to be a lot better on this and similar tasks; the ‘evolved module’ explanation is redundant, and on top of that it is highly implausible, unless one treats evolution as magic.