To Learn Critical Thinking, Study Critical Thinking
Critical thinking courses may increase students’ rationality, especially if they do argument mapping.
The following excerpts are from “Does philosophy improve critical thinking skills?”, Ortiz 2007.
1 Excerpts
This thesis makes a first attempt to subject the assumption that studying [Anglo-American analytic] philosophy improves critical thinking skills to rigorous investigation.
…Thus the second task, in Chapter 3, is to articulate and critically examine the standard arguments that are raised in support of the assumption (or rather, would be raised if philosophers were in the habit of providing support for the assumption). These arguments are found to be too weak to establish the truth of the assumption. The failure of the standard arguments leaves open the question of whether the assumption is in fact true. The thesis argues at this point that, since the assumption is making an empirical assertion, it should be investigated using standard empirical techniques as developed in the social sciences. In Chapter 4, I conduct an informal review of the empirical literature. The review finds that evidence from the existing empirical literature is inconclusive. Chapter 5 presents the empirical core of the thesis. I use the technique of meta-analysis to integrate data from a large number of empirical studies. This meta-analysis gives us the best fix yet on the extent to which critical thinking skills improve over a semester of studying philosophy, general university study, and studying critical thinking. The meta-analysis results indicate that students do improve while studying philosophy, and apparently more so than general university students, though we cannot be very confident that this difference is not just the result of random variation. More importantly, studying philosophy is less effective than studying critical thinking, regardless of whether one is being taught in a philosophy department or in some other department. Finally, studying philosophy is much less effective than studying critical thinking using techniques known to be particularly effective, such as LAMP.
…To establish whether and how improvement is in fact occurring in students’ CTS, we need ways of measuring such improvement. In other words, we have to be able to operationalize CTS. Several attempts to devise such measures have been made. They include common written tests (essays), direct classroom observations, individual interviews, and student and teacher journals. Although techniques for gathering information on students’ CTS come in a variety of forms, the most objective, standardized measures are multiple-choice tests (e.g. the Watson-Glaser Critical Thinking Appraisal, Cornell Critical Thinking Test, California Critical Thinking Skills Test, and the College Assessment of Academic Proficiency tests). The objectivity of these tests is based on the fact that they have received, more than other CT measuring techniques, constant evaluation with regard to two major indicators of quality information in educational evaluation: validity and reliability. On this point, two experts on CT evaluation have said that “these tests have been carefully developed and are likely to serve more effectively than hastily constructed instruments” (Norris & Ennis, 1990, p. 55). …Therefore, we have focused on research studies that have measured CT gain by pre- and post-testing of skills, using the standard CTS tests mentioned. That is, studies that have measured students’ CTS at the beginning of a course or academic period and then again at the end of it. In this sense, CTS improvement means getting better on CTS tests within a given, standard time interval.
…The crucial link, however, is that, when well taught in philosophy, students are required to extensively practice the core skills of reasoning and making or analysing arguments – the central CT skills. Such practice is highly likely to lead to improvement in reasoning and argument skills, and thus to gains in CT abilities. This is a vital consideration, as we shall see; since it is the nature of such practice, its quality and quantity, rather than the study of philosophy per se, which turns out to be the indispensable condition for the improvement of CTS. There is also an additional element in support for the claim that philosophy tends to help improve CTS. It is the fact that philosophy students receive the guidance of experts in reasoning – that is what professors of philosophy are. This element requires special attention.
…We shall turn, shortly, to a careful review of the existing evidence, but because the findings of this review are so central to the present thesis, let us summarize them briefly first. With the exception of the Philosophy for Children program (P4C), there has been strikingly little interest in empirical evaluation by philosophers of the extent to which their discipline actually enhances critical thinking. At present, the volume of published work in connection with P4C pedagogy shows a positive impact on children’s reasoning skills (Garcia-Moriyon, Rebollo, & Colom, 2005). Two recent meta-analyses support this claim…Two meta-analyses have also been conducted to evaluate the program (Garcia-Moriyon et al., 2005; Trickey & Topping, 2004). In the Trickey & Topping meta-analysis, similar “positive outcomes” were found. The review included ten studies (three of which are included in the Montclair list) measuring outcomes in reading, reasoning, cognitive ability, and other curriculum-related abilities. The study showed an effect size of 0.43, with low variance, indicating a consistent moderate positive effect for P4C on a wide range of outcomes (p. 365). In Garcia-Moriyon’s meta-analysis, 18 studies fit the following criteria: (1) studies testing the effectiveness of P4C in improving reasoning skills or mental abilities and (2) studies including enough statistical information to calculate the magnitude of effect sizes. All effect sizes except one are positive, and the estimation of the mean effect size yielded a value of 0.58 (p < .01; CI .53–.64), indicating that P4C has a positive effect on reasoning skills. (p. 21) …Of the 116 studies selected for the meta-analysis by Garcia-Moriyon et al., the authors had to exclude the vast majority for not meeting the minimum criteria related to research design, data analysis, and reporting. Only 18 of the 116 fitted the criteria.
This deficiency in so much of the P4C research has led Reznitskaya, 2005, to draw particular attention to the lack of appropriate statistical procedures in the great majority of these studies.
…With regard to undergraduate and graduate students, only a few studies of the impact of philosophy on critical thinking were found. This is despite the fact that philosophy departments in Western universities commonly claim that philosophy has this effect. There is also a striking contrast between the abundant research regarding the impact of social sciences, especially psychology and education, or nursing, for example, on critical thinking and the paucity of research done on philosophy (and, for that matter, the hard sciences) in this regard.
…I have been able to discover only a small number of studies (five) which have sought to directly measure the relationship between the study of philosophy by undergraduates and the development of CTS. Even within this group, there were distinct variations in research design. What unites them, however, is that, albeit in different ways, they were all concerned to measure the link between studying philosophy at college and improving CTS. In all (Annis & Annis, 1979; Harrell, 2004; Reiter, 1994; Ross & Semb, 1981) but one (Facione, 1990) of these cases, the philosophy students belonged to the experimental group.
…The specific study by Ross & Semb 1981 raised a suggestion that is surely worthy of closer and more serious analysis. These authors indicated that the gains in CTS due to studies in philosophy appeared to depend on the method of instruction. The implication here is that traditional philosophy (lectures, discussions) does not generate the high degree of effectiveness in improving CT abilities that are achievable with more innovative methods of teaching. This fourth point needs amplification. Ross and Semb set up a second experiment, in which the experimental group (following the Keller Plan) were encouraged to concentrate on informal argumentation and the comprehension of written texts with personalized instruction (PSI), while the control group were taught by traditional means of lectures and discussion. They found that the experimental group made greater gains in CTS than did the control group. Harrell, in 2004, established a similar correlation using argument mapping in the experimental group. Her conclusion was: “While, on average, all of the students in each of the sections improved their abilities on these tasks over the course of the semester, the most dramatic improvements were made by the students who learned how to construct arguments.” (p. 14) …This fourth point is further underscored by the fact that consistently positive results in CT gain have been provided by critical thinking courses offered by Western Philosophy Departments (Butchard, 2006; Donohue et al., 2002; Hitchcock, 2003; Rainbolt & Rieber, 1997; Spurret, 2005; Twardy, 2004). These courses are exceptional, in that they are not standard philosophy courses. They are focused on teaching critical thinking skills in their own right. Their positive results reinforce the idea that innovative methods of teaching within philosophy departments might produce better results in the teaching of critical thinking skills. 
However, we cannot infer from that that philosophy per se improves critical thinking skills …While Panowitsch and Balkcum were interested in the implications of these results concerning the development of moral judgment and the utility of the DIT, their findings also showed something else, which interests us: that a course in Formal Logic yields greater CT gains than a course in Ethics. In other words, the kind of philosophy course one studies makes a difference to the gain in CTS…There are, in addition, some studies which have sought to explore the relationship between Logic and the development of reasoning skills. In so far as reasoning and CT skills are not wholly co-terminous, these studies could not resolve the debate about the relationship between Logic and CTS, even were they definitive in themselves. As it happens, their findings, like those of a number of the studies already considered, are not conclusive even within their prescribed domain. Some of these studies seem to indicate that the study of Logic produces statistically significant gains in reasoning skills (Stenning, Cox, & Oberlander, 1995; Van der Pal & Eysink, 1999). Conversely, others indicate that the study of the abstract principles of logic produces no discernible improvement in reasoning (Cheng, Holyoak, Nisbett, & Oliver, 1986).
…There appear to have been only three studies that have tried to estimate the growth in reasoning and CTS in students who have completed, or are close to having completed, an undergraduate major in philosophy. Two of these studies (Hoekema, 1987; Nieswiadomy, 1998) were centred on determining the level of competence in reasoning abilities of such students seeking to enter various graduate schools at university. Although these studies suggest that philosophy majors generally do better on graduate tests (GMAT, LSAT, GRE), they do not disentangle the contribution of philosophy from the students’ selection effect. People who choose to study philosophy may, in general, be good at reasoning anyway, and the evidence these two studies provide does not indicate to what extent the study of philosophy in itself improves reasoning skills.
…In summary, we classified the studies into seven groups. These groups will make it possible for us to measure the impact of the two major independent variables selected for the purposes of this inquiry: the amount of philosophy and CT instruction the students have received. These groups are as follows:
Courses offered by philosophy departments consisting of formal instruction in Anglo-American analytic philosophy, or what I shall call ‘pure philosophy’ (Pure Phil).
Critical thinking courses offered by philosophy departments with no instruction in argument mapping (Phil CT No AM).
Critical thinking courses offered by philosophy departments with some instruction in argument mapping (Phil CT-AM).
Critical thinking courses offered by philosophy departments with lots of argument mapping practice (Phil LAMP). These are courses fully dedicated to teaching CT with argument mapping.
Courses offered by non-philosophy departments and wholly dedicated to explicit instruction in CT (No Phil, Ded-CT).
Courses offered by non-philosophy departments with some form of conventional CT instruction embedded (No Phil, Some-CT).
Courses offered by non-philosophy departments with no special attempts being made to cultivate CT skills (No Phil, No-CT).
Ortiz gives a fairly good overview of what a meta-analysis is and how you would do a basic one; I’ll skip over this material. The graphical results are on p. 82: all 7 categories give positive effect sizes; the smallest difference between the improvements exhibited by subjects in the controls (#7) and subjects in the various philosophy approaches (#1–6) is d=0.09 (#1, #6); the largest, d=0.63 (#4).
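For readers unfamiliar with what an effect size like d=0.43 or d=0.63 concretely means, a standardized mean gain (Cohen’s d) can be computed from pre- and post-test scores. A minimal sketch, using entirely hypothetical test scores (not Ortiz’s data) and the simple pooled-SD variant of the formula:

```python
# Illustrative only: the scores below are invented, not from any study.
import statistics

def cohens_d(pre, post):
    """Standardized mean gain: (mean_post - mean_pre) / pooled SD."""
    mean_pre, mean_post = statistics.mean(pre), statistics.mean(post)
    sd_pre, sd_post = statistics.stdev(pre), statistics.stdev(post)
    pooled_sd = ((sd_pre ** 2 + sd_post ** 2) / 2) ** 0.5
    return (mean_post - mean_pre) / pooled_sd

# Hypothetical class: everyone gains 4 points on a CTS test.
gain = cohens_d([70, 75, 80, 85, 90], [74, 79, 84, 89, 94])
```

So a “moderate” effect like 0.5 SD corresponds, roughly, to a class whose average score moves by half the spread of the score distribution.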
…The Keller Plan and Logic courses yielded results so much better than the other Introduction to Philosophy courses that they inflated the results for this pool of studies. In fact, if these two are separated out, the difference made to CT skills by the Introduction to Philosophy courses becomes appreciably smaller (ES = 0.19 SD). What is more important, however, is the intriguing implication that particular approaches to teaching made a more substantial difference than the subject matter in itself. As we have seen, this turns out to be true in regard to other approaches, also. (See section 4.3, “Evidence from Undergraduate Students”, in the Review of the Existing Evidence) …The results of the Michigan Meta-Analysis of Research on Instructional Technology in Higher Education (Wittrock, 1986) found a “moderate” (medium) effect size (.49) for students in Keller Plan courses. Although this effect size was described as ‘moderate’, it was found to be greater than those for other teaching strategies, such as Audio-Tutorials (.20), Computer-Based Teaching (.25), Programmed Instruction (.28) and Visual-Based Instruction (.15). What all this suggests is that, while studying philosophy will make some difference, a greater difference will be made if one does either of two things: concentrate on Logic itself, or keep the broader content but change the way you teach it.
…The discussion about the net effect of the study of philosophy on the development of CTS also leads us to ask ourselves: To what extent would those philosophy students have made the gains they did had they not been studying Anglo-American analytic philosophy? Or to put it another way, is the gain due to philosophy, or would it have been made by students in the normal course of events, spending a semester in any serious discipline? Whence the question, Does (Anglo-American analytic) philosophy improve critical thinking skills over and above university education in general? In order to answer this question, we must first establish what difference a university education in general actually makes.
We have data for this, but the data on its own is insufficient. What our data shows is that a university education, in general, produces a gain of 0.12 of an SD over any given semester. Superficially, this compares unfavourably with the gain of 0.26 SD in a semester for philosophy. We have noted several caveats with respect to the results for philosophy. We need to ask here what the 0.12 SD for university education in general actually means. As it happens, those not attending university at all appear to improve their CTS by 0.10 SD in the equivalent of the first semester after leaving school (Pascarella, 1989). This would suggest that attending university, at least initially, makes no appreciable difference, because CTS improve at that age anyway by much the same amount.
We need, however, to be a little cautious in drawing conclusions here, since Pascarella’s own studies suggested an improvement of 0.26 SD in the first semester of university – the improvement our own data from the meta-analysis suggests is achieved by students in philosophy. What does all this mean? In fact, statistically speaking, as explained briefly above (5.4 Results section), once you allow for confidence intervals, there is not much to choose between gains of 0.1, 0.12 and 0.26. In short, whether we use Pascarella’s data alone or the more systematic data deriving from the meta-analysis, it appears that there is little to choose between not going to university, going to university in general, and studying (pure) Philosophy at university, as regards improvements in CTS in a given semester period. This is both a counterintuitive and even a disconcerting conclusion.
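Ortiz’s point that “there is not much to choose between gains of 0.1, 0.12 and 0.26” can be checked with back-of-the-envelope confidence intervals. A minimal sketch using the standard large-sample approximation for the standard error of Cohen’s d and hypothetical group sizes of 30 (the actual sample sizes vary by study and are not reproduced here):

```python
# Illustrative only: n=30 per group is a hypothetical sample size.
import math

def ci_for_d(d, n1, n2, z=1.96):
    """Approximate 95% CI for a between-groups Cohen's d."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return (d - z * se, d + z * se)

small = ci_for_d(0.12, 30, 30)   # roughly (-0.39, 0.63)
larger = ci_for_d(0.26, 30, 30)  # roughly (-0.25, 0.77)
```

At samples this size, both intervals straddle zero and overlap heavily, which is exactly why gains of 0.12 and 0.26 SD cannot be confidently distinguished from each other, or from no gain at all.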
The results seem sensible to me. I often reflected while earning my own philosophy degree that, of all the courses, the one I valued and drew on most was a course on ‘contemporary philosophy’ where the sole activity was the teacher handing us a syllogism on, say, personal identity, asking whether it was logically valid, and, once it finally was, which premise we would reject and why—which seems similar to the descriptions of the contents of critical thinking courses. After that, I valued the symbolic logic courses, and then a Pre-Socratics course because I liked the Pre-Socratics a lot; but I did not think I gained very much from the more ‘historical’ courses on, say, Descartes. A good critical thinking course sounds like being taught the core philosophical approach: taking little for granted, constructing arguments, and examining each piece critically. So it does not surprise me much that time spent on critical thinking and argument mapping will produce more gains than time spent on topics like ethics.
This also suggests that the ‘great books’ method is not suitable for learning critical thinking or rationality, at least for freshmen college students (the principal focus of the studies).
2 See Also
According to Google Scholar, Ortiz’s thesis has been cited once, by inclusion in the bibliography of a thesis discussing religion education; a Google search turned up a citation in another paper, on understanding psychology questions, which did not seem very useful reading. Googling, I did find an ebook on argument mapping which may be of use.
Previous argument mapping discussion:
“Argument Maps Improve Critical Thinking”:
Charles R. Twardy provides evidence that a course in argument mapping, using a particular software tool, improves critical thinking. The improvement in critical thinking is measured by performance on a specific multiple-choice test (the California Critical Thinking Skills Test). This may not be the best way to measure rationality, but my point is that, unlike almost everybody else, there was measurement and statistical improvement!…
To summarize my (clumsy) understanding of the activity of argument mapping: One takes a real argument in natural language (op-eds are a good source of short arguments; philosophy is a source of long arguments). One then elaborates it into a tree structure, with the main conclusion at the root of the tree. The tree has two kinds of nodes (it is bipartite). The root conclusion is a “claim” node. Every claim node has approximately one sentence of English text associated with it. The children of a claim are “reasons”, which do NOT have English text associated. The children of a reason are claims. Unless I am mistaken, the intended meaning of the connection from a claim’s child (a reason) to the parent is implication, and the meaning of a reason is the conjunction of its children.
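That claim/reason tree can be sketched as a tiny data structure. This is my own minimal rendering of the description above, not the representation used by any actual mapping tool; the class names and the example argument are invented:

```python
# A minimal sketch of the bipartite claim/reason tree described above.
from dataclasses import dataclass, field

@dataclass
class Reason:
    """A reason: the implicit conjunction of its child claims,
    which together are taken to imply the parent claim."""
    premises: list = field(default_factory=list)  # list of Claim

@dataclass
class Claim:
    """A claim: roughly one sentence of text, supported by reasons."""
    text: str
    reasons: list = field(default_factory=list)   # list of Reason

# Root conclusion supported by one reason conjoining two premises.
conclusion = Claim(
    "Socrates is mortal.",
    reasons=[Reason(premises=[
        Claim("All men are mortal."),
        Claim("Socrates is a man."),
    ])],
)
```

Claims alternate with reasons down the tree: a complex argument would simply give the premise-claims reasons of their own.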
“How are critical thinking skills acquired? Five perspectives”:
Tim van Gelder discusses acquisition of critical thinking skills, suggesting several theories of skill acquisition that don’t work, and one with which he and hundreds of his students have had significant success. [LAMP/argument mapping]
Stephen Law vs. William Lane Craig Debate: Argument map.
I find this bouncy animation with badly anti-aliased text to be ridiculously unreadable. It feels like being spoon-fed an argument … but having the spoon yanked away before one can actually eat from it: it manages to simultaneously feel like it is insulting my intelligence and not giving me a chance to comprehend. A single big flat diagram would be preferable in every regard.
Move your mouse to the right of the screen and zoom out—it doesn’t have to be an animation. And you can drag to navigate.
Something else to consider.
I think the method of testing deserves emphasis. I’m uncertain how CTS multiple-choice test performance would impact instrumental rationality. Relevant evidence would be appreciated.
Perhaps something in the vein of http://lesswrong.com/lw/ahz/cashing_out_cognitive_biases_as_behavior/ ?
Downvoted; rather obvious result, as you yourself point out.
While the result does seem quite intuitive in retrospect, that may be due to hindsight bias. It is also worth noting that things that “everyone knows” are often wrong, so it is worth testing whether they are in fact true.
Sometimes the apparently obvious is worth saying.
Upvoted to counteract your downvote. The result may be obvious to you, but (as the article points out) the claim that studying philosophy is worthwhile because it improves critical thinking is a very common one.
Suppose Rhawawn knew in advance that you would upvote the post if and only if Rhawawn downvoted it. Then if Rhawawn does nothing, nothing happens; if Rhawawn downvotes, nothing happens. No information can enter the system. Please don’t let other people’s voting influence your own; it dilutes the amount of influence they have. You should judge each post on its merits and vote accordingly.
I would do that if upvotes and downvotes were shown separately, but as long as only the difference is shown, if there’s a reasonable but not terribly interesting comment at 1 I leave it alone, if it’s at −2 I upvote it and if it’s at 16 I downvote it.
This strategy only seems to make sense if I believe that the total karma score of a comment is intended to reflect the number of people who happened to read it and think it significantly above/below average, rather than reflect the relative quality of the comment. For example, if I believe that a comment ranking 20 isn’t necessarily considered any better by its readers than one ranking 2, it may just have ten times as many readers (e.g., because it’s part of a popular thread).
If that weren’t the case, it would seem to follow that each voter wasn’t voting independently, but was instead taking other voters’ behavior into account.
But I observe that many people do treat a comment ranking 20 as having been judged as better by its readers than a comment ranking 2.
Which suggests the situation is more complicated than how you present it here.
Indeed it is complicated. I’ve previously contemplated voting down posts that I like a lot in order to encourage other folks to vote them back up to their “deserved” level. Then I could reverse my vote when most other folks had stopped voting. This would double the impact of my vote.
Sure, that’s a complicated-but-likely-efficient strategy for doubling the impact of your vote.
This sort of thing always strikes me as silly, though. If what I want to do is maximize my impact on the karma scores of comments, it’s easier to just create lots of dummy accounts and vote with them.
Yes, yes, I understand that this is against the local conventions of the site, and I’m not suggesting that people do it. However, it is perhaps instructive to consider why those conventions exist, and whether the existence of those reasons has further implications in terms of what sort of behavior is desirable.
From my perspective, the whole notion of trying to maximize the impact of my vote distorts the purpose of a system intended to communicate information about collective preferences. But of course nobody is obligated to share my beliefs about the purpose of the karma system, or to value that purpose even if they do.
I was going to criticise this, but then I thought, “well, serves people right for voting tactically”.
Gotta say though, if everyone did that, we’d have a huge mess.