This is true, but I’m still not sure how you get from this to the conclusion that engineering training is better for improving rationality than philosophy training, given that Harvard and Princeton students are already well above the baseline score for average undergraduates.
Assuming there is no selection effect (a pretty big assumption), philosophy training raises the CRT score of the average undergraduate from 0.65 to 1.16, a gain of 0.51 points. Assuming engineering training accounts for the entire CRT score difference between Princeton and MIT students (another big assumption), engineering training raised their average score from 1.63 to 2.18, a gain of 0.55 points. The implied gains are nearly identical, so how am I supposed to draw conclusions from this data about which training is better for rationality?
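To make that concrete, here is the arithmetic as a minimal Python sketch; the variable names are mine, and it takes the two assumptions above (no selection effect, and the whole Princeton–MIT gap being a training effect) at face value:

```python
# Back-of-the-envelope comparison of implied CRT gains,
# granting both assumptions stated above.

philosophy_gain = 1.16 - 0.65   # average undergrad -> philosophy-trained
engineering_gain = 2.18 - 1.63  # Princeton -> MIT, attributed entirely to engineering

print(f"Philosophy:  +{philosophy_gain:.2f} points")
print(f"Engineering: +{engineering_gain:.2f} points")
# Philosophy:  +0.51 points
# Engineering: +0.55 points
```

On these numbers the two trainings look about equally effective, which is exactly why the comparison seems underdetermined.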
I think (a) that there is probably a big selection effect, and (b) that it is possible that the test used is biased in favor of mathematical rather than generally logical thinking. The CRT also doesn’t include things like noticing when a word is meaningless, which I would think is one of the most important skills for philosophers. I’m not sure how one would test that.
I think you’re right that the data don’t show what I had thought. I had thought that professional philosophers did worse than MIT undergrads, but now it looks like there isn’t data about that. I think I was confusing it with the results from professional American judges (almost all graduates of the other program which claims to teach reasoning).