Argument Maps Improve Critical Thinking
Charles R. Twardy provides evidence that a course in argument mapping, using a particular software tool, improves critical thinking. The improvement in critical thinking is measured by performance on a specific multiple-choice test (the California Critical Thinking Skills Test). This may not be the best way to measure rationality, but my point is that, unlike almost everybody else, there was measurement and a statistically significant improvement!
Also, his paper is the most methodologically sound that I've seen in the field of "individual rationality augmentation research".
To summarize my (clumsy) understanding of the activity of argument mapping:
One takes a real argument in natural language (op-eds are a good source of short arguments; philosophy is a source of long ones) and elaborates it into a tree structure, with the main conclusion at the root of the tree. The tree has two kinds of nodes, alternating by level (it is a bipartite graph). The root conclusion is a "claim" node. Every claim node has roughly one sentence of English text associated with it. The children of a claim are "reasons", which have no English text of their own. The children of a reason are claims. Unless I am mistaken, the intended meaning of the connection from a reason up to its parent claim is implication, and the meaning of a reason itself is the conjunction of its child claims.
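To make that structure concrete, here is a minimal sketch in Python of the two node types as I've described them. The class names and fields are my own invention for illustration, not anything from Twardy's paper or the Rationale tool:

```python
from dataclasses import dataclass, field

@dataclass
class Reason:
    """A reason node: no text of its own; it stands for the
    conjunction of its child claims, offered as support for
    (an implication toward) its parent claim."""
    premises: list = field(default_factory=list)  # child Claim nodes

@dataclass
class Claim:
    """A claim node: roughly one English sentence, supported
    by zero or more child reasons."""
    text: str
    reasons: list = field(default_factory=list)  # child Reason nodes

# A tiny example map, with the conclusion at the root:
argument = Claim(
    "Socrates is mortal.",
    reasons=[Reason(premises=[
        Claim("Socrates is a man."),
        Claim("Every man is mortal."),
    ])],
)
```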
In elaborating the argument, it's often necessary to insert implicit claims. This should be done in keeping with the "Principle of Charity": interpret the argument in the way that makes it the strongest argument possible.
There are two syntactic rules that make it easy to find flaws in argument maps:
The Rabbit Rule: Informally, “You can’t conclude something about rabbits if you haven’t been talking about rabbits”. Formally, “Every meaningful term in the conclusion must appear at least once in every reason.”
The Holding Hands Rule: Informally, “We can’t be connected unless we’re holding hands”. Formally, “Every meaningful term in one premise of a reason must appear at least once in another premise of that reason, or in the conclusion”.
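As a rough illustration, here is how the two rules might be checked mechanically for a single reason. The hard part is deciding what counts as a "meaningful term"; the sketch below crudely approximates it as any word not on a small stopword list, so a real checker would need at least stemming and synonym handling:

```python
# Crude approximation: a "meaningful term" is any word not in this list.
STOPWORDS = {"a", "an", "the", "is", "are", "all", "every", "some", "of"}

def terms(sentence: str) -> set:
    words = sentence.lower().replace(",", " ").replace(".", " ").split()
    return {w for w in words if w not in STOPWORDS}

def rabbit_rule_ok(conclusion: str, premises: list) -> bool:
    """Every meaningful term in the conclusion must appear
    somewhere among the premises of the reason."""
    reason_terms = set().union(*(terms(p) for p in premises))
    return terms(conclusion) <= reason_terms

def holding_hands_ok(conclusion: str, premises: list) -> bool:
    """Every meaningful term in one premise must appear in
    another premise of the same reason, or in the conclusion."""
    for i, premise in enumerate(premises):
        others = terms(conclusion).union(
            *(terms(p) for j, p in enumerate(premises) if j != i))
        if not terms(premise) <= others:
            return False
    return True

premises = ["Socrates is a man.", "Every man is mortal."]
print(rabbit_rule_ok("Socrates is mortal.", premises))    # True
print(holding_hands_ok("Socrates is mortal.", premises))  # True
```

Even this toy version shows the spirit of the rules: if the conclusion above mentioned rabbits, rabbit_rule_ok would fail, because "rabbits" appears in no premise.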
I have tried the Rationale tool, and it seems afflicted with creeping featurism. My guess is that the open-source tool FreeMind could support argument mapping as described in Twardy's article, if the user is disciplined about it.
I’d love comments offering alternative rationality-improvement tools. I’d prefer tools intended for solo use (that is, prediction markets are awesome but not what I’m looking for) and downloadable rather than web services, but anything would be great.