Argument Maps Improve Critical Thinking
Charles R. Twardy provides evidence that a course in argument mapping, using a particular software tool, improves critical thinking. The improvement is measured by performance on a specific multiple-choice test (the California Critical Thinking Skills Test). This may not be the best way to measure rationality, but my point is that, unlike almost everybody else, there was measurement and statistically significant improvement!
Also, his paper is the best, methodologically, that I’ve seen in the field of “individual rationality augmentation research”.
To summarize my (clumsy) understanding of the activity of argument mapping:
One takes a real argument in natural language (op-eds are a good source of short arguments; philosophy is a source of long ones) and elaborates it into a tree structure, with the main conclusion at the root of the tree. The tree has two kinds of nodes (it is a bipartite graph). The root conclusion is a “claim” node, and every claim node has approximately one sentence of English text associated with it. The children of a claim are “reasons”, which do NOT have English text associated. The children of a reason are claims. Unless I am mistaken, the intended meaning of the connection from a claim’s child (a reason) to the parent is implication, and the meaning of a reason is the conjunction of its children.
In elaborating the argument, it’s often necessary to insert implicit claims. This should be done in accordance with the “Principle of Charity”: interpret the argument in the way that makes it the strongest argument possible.
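For concreteness, here is a minimal sketch of that structure. The class and field names below are my own, not Twardy’s, and the example claims are invented; it is just the two-node-kind tree described above.

```python
# A minimal sketch of the structure described above. Claims carry roughly one
# sentence of English text; reasons carry no text of their own and stand for
# the conjunction of their child claims.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    text: str                # roughly one sentence of English
    implicit: bool = False   # inserted under the Principle of Charity?
    reasons: List["Reason"] = field(default_factory=list)  # independent lines of support

@dataclass
class Reason:
    premises: List[Claim] = field(default_factory=list)    # jointly imply the parent claim

# The root of the tree is the main conclusion:
conclusion = Claim("Socrates is mortal.")
conclusion.reasons.append(Reason(premises=[
    Claim("Socrates is a man."),
    Claim("Every man is mortal.", implicit=True),
]))
```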
There are two syntactic rules which can easily find flaws in argument maps:
The Rabbit Rule: Informally, “You can’t conclude something about rabbits if you haven’t been talking about rabbits”. Formally, “Every meaningful term in the conclusion must appear at least once in every reason.”
The Holding Hands Rule: Informally, “We can’t be connected unless we’re holding hands”. Formally, “Every meaningful term in one premise of a reason must appear at least once in another premise of that reason, or in the conclusion”.
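These two checks are mechanical enough to sketch in code. The sketch below crudely approximates “meaningful terms” as lowercased words minus a stopword list (real term extraction is much harder); the rule logic itself follows the formal statements above.

```python
# Rough sketches of the Rabbit Rule and Holding Hands Rule, where a "reason"
# is given as a list of premise strings supporting a conclusion string.
def terms(text, stopwords=frozenset({"a", "an", "the", "is", "are", "all", "every"})):
    """Crude stand-in for extracting the meaningful terms of a claim."""
    return {w for w in text.lower().split() if w not in stopwords}

def rabbit_rule(conclusion, premises):
    """Every meaningful term in the conclusion must appear somewhere in the reason."""
    reason_terms = set().union(*(terms(p) for p in premises))
    return terms(conclusion) <= reason_terms

def holding_hands(conclusion, premises):
    """Every term in each premise must appear in another premise or the conclusion."""
    for i, premise in enumerate(premises):
        others = terms(conclusion).union(*(terms(p) for j, p in enumerate(premises) if j != i))
        if not terms(premise) <= others:
            return False
    return True

reason = ["Socrates is a man", "Every man is mortal"]
print(rabbit_rule("Socrates is mortal", reason))    # True
print(holding_hands("Socrates is mortal", reason))  # True
print(rabbit_rule("Socrates is mortal", ["Rabbits are mortal"]))  # False: no Socrates
```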
I have tried the Rationale tool, and it seems afflicted with creeping featurism. My guess is that the open-source tool FreeMind could support argument mapping as described in Twardy’s article, if the user is disciplined about it.
I’d love comments offering alternative rationality-improvement tools. I’d prefer tools intended for solo use (that is, prediction markets are awesome but not what I’m looking for) and downloadable rather than web services, but anything would be great.
Thanks for the vote of confidence. I should say that while I think my paper presents things well, I cannot take credit for the statistics or experimental design. Tim van Gelder already had the machinery in place to measure pre-post gains, and had done so for several semesters. The results were published in Donohue et al. 2002. The difference here was that I took over teaching, and we continued the pre-post tests.
Although argument maps are usually used to map existing natural language arguments, one could start with the map. I like to think that the more people use these maps, the more their thinking naturally follows such a structure. I’m sure I could use more practice myself.
Just a note on terminology: the tree does have two kinds of nodes, but since every tree is already bipartite (trees are 2-colorable), calling it a bipartite graph says less than intended; the point is that claims and reasons alternate by level.
I think arguments in argument maps can be made probabilistic and converted to Bayesian networks. But as it is, it takes long enough just to make an argument map. I’ve recently discovered Gheorghe Tecuci’s work. He’s just down the hall from me, but I didn’t know his work until I heard him give a talk recently. He has an elaborate system that helps analysts create structures very much like argument maps by filling in schemas, and then reasons quantitatively with them. The tree structure and the simplicity of the combination rules (min, max, average, etc.) are more limited than a full Bayesian network, but it seems to be a very nice extension of argument maps.
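To give a flavor of the kind of quantitative propagation such combination rules allow, here is a toy sketch. The min/max/average rules are the ones named above; the tree encoding and the numbers are my own invention, and a real system like Tecuci’s is considerably richer.

```python
# Toy propagation of confidence scores up an argument tree. A node is either
# ("leaf", score) or (rule, [children]) with rule in {"min", "max", "avg"}.
def propagate(node):
    kind, payload = node
    if kind == "leaf":
        return payload
    scores = [propagate(child) for child in payload]
    if kind == "min":   # conjunctive premises: as weak as the weakest
        return min(scores)
    if kind == "max":   # independent reasons: take the strongest
        return max(scores)
    if kind == "avg":   # a compromise combination rule
        return sum(scores) / len(scores)
    raise ValueError(f"unknown rule: {kind}")

# Two independent reasons for a conclusion, each a conjunction of premises:
argument = ("max", [
    ("min", [("leaf", 0.9), ("leaf", 0.6)]),
    ("min", [("leaf", 0.7), ("leaf", 0.7)]),
])
print(propagate(argument))  # 0.7
```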
Here is a tool that will help figure out whether you are overconfident, underconfident, or neither.
Here is a good puzzle involving rationality. See the discussion here on Less Wrong once you’ve completed it.
Those set kind of a low bar. Don’t we have a test that more rigorously examines whether you infer correctly from data?
If anyone wants some fun (not making any claims about whether solving these indicates rationality; they’re just for fun), try the blue eyes problem (http://www.xkcd.com/blue_eyes.html) and the “hardest logic puzzle in the world” (http://en.wikipedia.org/wiki/The_Hardest_Logic_Puzzle_Ever).
The blue eyes puzzle in particular I enjoyed immensely.
This looks like a formalised general version of an Intervention Logic, a tool used in government to explain how a proposed policy will achieve a desired policy goal.
Tell us more about this Intervention Logic.
What I’ve described below is the ideal; naturally, as soon as politics gets involved in anything, you can move away from the ideal rapidly, and there’s no way of getting politics out of policy formation.
Say you have a policy problem to solve or a policy goal to meet (reducing road fatalities, improving high school graduation rates, etc.), and you have a policy you think will work to solve the problem, but you want to check your reasoning or develop a formalised explanation so you can convince another analyst or agency. One way to do this is to develop an intervention logic.
The basic format of an intervention logic is a flowchart that outlines the causal relationship between your policy and the desired outcome: “This policy will cause A, which causes B, which causes C, which results in outcome Z.”
It’s not a perfect system. It violates one of the cardinal rules of rationality, since it’s generally used to justify a pre-reasoned position rather than to reason from scratch, and there’s inevitably a certain amount of handwaving involved, since the causal factors in most policy work are very hard to get a grip on. But at least it forces the person using it to state their assumptions and logic explicitly.
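A minimal sketch of that format, with invented placeholder steps; the value of writing the chain down is that every link becomes an explicit, criticizable assumption.

```python
# An intervention logic as a linear causal chain (steps are invented examples).
intervention_logic = [
    ("Lower urban speed limits", "Average vehicle speeds fall"),
    ("Average vehicle speeds fall", "Crashes are less severe"),
    ("Crashes are less severe", "Road fatalities decrease"),
]

# Each (cause, effect) link is an assumption that can be challenged separately.
for cause, effect in intervention_logic:
    print(f"{cause} -> {effect}")
```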
Semi-OT: what about software for drawing Pearlean causal graphs that permit counterfactual surgery?
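For what “counterfactual surgery” amounts to: in Pearl’s framework, intervening on a variable (the do-operator) severs that variable from its usual causes while leaving the rest of the graph intact. A minimal sketch of that operation, not any particular tool’s API:

```python
# Pearl-style graph surgery on a causal graph given as node -> list of parents.
def do(parents, variable):
    """Model the intervention do(variable): cut all edges into `variable`."""
    surgered = {node: list(causes) for node, causes in parents.items()}
    surgered[variable] = []  # the intervened variable no longer listens to its causes
    return surgered

# Hypothetical example graph:
graph = {
    "genotype": [],
    "smoking": ["genotype"],
    "tar": ["smoking"],
    "cancer": ["tar", "genotype"],
}
print(do(graph, "smoking"))  # smoking's parents removed; downstream edges intact
```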
Oddly enough, Twardy’s research in philosophy (as opposed to philosophy education) is related to Pearlean counterfactuals. He worked somewhat with Lucas Hope on a tool called “Causal Reckoner”.
See:
http://portal.acm.org/citation.cfm?id=1082172
And:
http://www.springerlink.com/content/30m8ac6uafkxu9k3/
However, my google-fu is not strong enough to find the software itself. Possibly contacting the various individuals involved would be necessary.
Luke Hope and Karl Axnick did most of the work on Causal Reckoner. I have used it, but I did very little to develop it. However, I believe it is GPL, so it could be posted.
I think so. The intervention paper links to the providing site, but the direct link is down with a server error; poking around the site reveals no other mirrors or mentions of the Causal Reckoner. The Internet Archive shows the pages fine, but the only download page is about getting the source via CVS—and not anonymous CVS! The CVS server doesn’t seem to expose any HTTP files either (SSH seems to be the only way in).
It’s too bad—I kept thinking about Eliezer’s old post about ‘what would ordinary things like sight be like if they were RPG powers/abilities’, and it seems to me like a cool concept to try out would be a game where you can literally see the causal decision graphs governing the actions of characters. Perhaps another power could be snipping branches or modifying weights to manipulate characters into doing your bidding or simply getting out of the way. (One could start off trapped in a jail cell… :)
But I’ve tried a couple ways to view the PDF and I can’t seem to see the screenshot of the GUI! Now that’s annoying.
I don’t suppose there are any non-paywall versions?
Let me google that for you:
The 1st
http://crpit.com/confpapers/CRPITV38Marriott.pdf
The 2nd
http://www.csse.monash.edu.au/~korb/pubs/intervene.pdf
Very-OT: have you guys seen the awesome lmgtfy?
I’ve heard of the game WFF’n’Proof ever since it was invented, but I’ve never had any closer knowledge of it. However, the publisher’s website claims dramatic improvements in IQ and mathematical performance from playing that and their other games.
WFF’n’Proof and Equations are the ultimate geek games. Fun, too, with the right crowd.
Here’s an absolutely phenomenal tool for creating diagrams of any level of complexity: the yEd Graph Editor even has auto-layout, which produces graphs with minimal overlap. http://www.yworks.com/en/products_yed_about.html
I think focusing on the tool here is misleading. It is the process of creating something like a dependency graph representing the entire argument (and knowing how to analyze that graph) that is the important point, and people have been doing that for almost as long as philosophy has been around. Every critical thinking class teaches such techniques.