We might also ask: How well do philosophers perform on standard tests of rationality, for example Frederick’s (2005) CRT?...
Your presentation here seems misleading to me. You imply that philosophers are merely average scorers on the CRT relative to the rest of the (similarly educated) population.
This claim is misleading for several reasons:
1) The philosophers’ score you cite is a mean for people who have had some graduate-level philosophical training. This is a set that will overlap with many of the other groups you mention. While it will include all professional philosophers, I don’t think a majority of the set will be professional philosophers: graduate-level courses in logic, political philosophy, and the like are pretty standard in graduate educations across the board.
2) Frederick takes scores from a variety of different schools, evidently capturing undergraduates, graduate students, and faculty, and comes up with a mean score of 1.24 for respondents who are members of a university. In contrast, Livengood (your source for the philosophers’ mean score) gets mean scores of 0.65 and 0.82 for people with undergraduate and graduate/professional education, respectively. If these two studies were using similar tests and methodologies, we should expect these scores to converge more. It seems likely that the Frederick study is not using comparable methodology or controls, making the straight comparison of scores misleading.
3) The Livengood study actually argues that people with some philosophical training tend to do significantly better than the rest of the population on the CRT, even when one controls for education. You do not mention this, and you really ought to, especially since the Livengood study, unlike the Frederick study, is the only one you cite whose methodology is relevant to the question you’re asking.