Reasoning isn’t about logic (it’s about arguing)
“Why do humans reason” (PDF), a paper by Hugo Mercier and Dan Sperber, reviewing an impressive amount of research with a lot of overlap with themes previously explored on Less Wrong, suggests that our collective efforts in “refining the art of human rationality” may ultimately be more successful than most individual efforts to become stronger. The paper sort of turns the “fifth virtue” on its head; rather than argue in order to reason (as perhaps we should), in practice, we reason in order to argue, and that should change our views quite a bit.
I summarize Mercier and Sperber’s “argumentative theory of reasoning” below and point out what I believe its implications are for the mission of a site such as Less Wrong.
Human reasoning is one mechanism of inference among others (for instance, the unconscious inference involved in perception). It is distinct in being a) conscious, b) cross-domain, c) used prominently in human communication. Mercier and Sperber make much of this last aspect, taking it as a huge hint to seek an adaptive explanation in the fashion of evolutionary psychology, which may provide better answers than previous attempts at explanations of the evolution of reasoning.
The paper defends reasoning as serving argumentation, in line with evolutionary theories of communication and signaling. In rich human communication there is little opportunity for “costly signaling”, that is, signals that are taken as honest because too expensive to fake. In other words, it’s easy to lie.
To defend ourselves against liars, we practice “epistemic vigilance”; we check the communications we receive for attributes such as a trustworthy or authoritative source; we also evaluate the coherence of the content. If the message contains symbols that match our existing beliefs, and packages its conclusions as an inference from these beliefs, we are more likely to accept it, and thus our interlocutors have an interest in constructing good arguments. Epistemic vigilance and argumentative reasoning are thus involved in an arms race, which we should expect to result in good argumentative skills.
What of all the research suggesting that humans are in fact very poor at logical reasoning? Well, if we do in fact “reason in order to argue”, poor performance is precisely what we should expect when subjects are studied in non-argumentative situations.
Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others’ arguments. M&S also plead for the “rehabilitation” of confirmation bias as playing an adaptive, useful role in the production of arguments in favor of an intuitively preferred view.
If reasoning is a skill evolved for social use, group settings should be particularly conducive to skilled arguing. Research findings in fact show that “truth wins”: once a group participant has a correct solution they will convince others. A group in a debate setting can do better than its best member.
The argumentative theory, Mercier and Sperber argue, accounts nicely for motivated reasoning, on the model that “reasoning anticipates argument”. Such anticipation colors our evaluative attitudes, leading for instance to “polarization” whereby a counter-argument makes us even more strongly believe the original position, or “bolstering” whereby we defend a position more strongly after we have committed to it.
These attitudes are favorable to argumentative goals but actually detrimental to epistemic goals. This is particularly evident in decision-making. Reasoning appears to help people little when deciding; it directs people to the decisions that will be easily justified, not to the best decisions!
As the authors put it: “Reasoning falls quite short of reliably delivering rational beliefs and rational decisions, [and] may even be, in a variety of cases, detrimental to rationality.”
However, it isn’t all bad news. The important asymmetry is between production of arguments, and their evaluation. In groups with an interest in finding correct answers, “truth wins”.
Quoting the paper again: “If we generalize to problems that do not have a provable solution, we should expect, if not necessarily truth, at least good arguments to win. [...] People are quite capable of reasoning in an unbiased manner at least when they are evaluating arguments rather than producing them and when they are after the truth rather than after winning a debate.”
Becoming individually stronger at sound reasoning is possible, Mercier and Sperber point out, but rare. The best achievements of reasoning, in science or morality, are collective.
If this view of reasoning is correct, a site dedicated to “refining the art of human rationality” should recognize this asymmetry between producing arguments and evaluating arguments and strive to structure the “work” being done here accordingly.
It should encourage individual participants to support their views, and perhaps take a less jaundiced view of “confirmation bias”. But it should also encourage the breaking down of arguments into small, separable pieces, so that they can be evaluated and filtered individually; that lines up with the intent behind “debate tools”, even if their execution currently leaves much to be desired.
It should stress the importance of “collectively seeking the truth” and downplay attempts at “winning the debate”. This, in particular, might lead us to take a more critical view of some common voting patterns, e.g. more upvotes for snarky one-liner replies than for longer, well-thought-out replies.
There are probably further conclusions to be drawn from the paper, but I’ll stop here and encourage you to read or skim it, then suggest your own in the comments.
In general, I think LessWrong.com would benefit from conspicuous guidelines: a readily-clickable FAQ or User’s Guide that describes posting etiquette and relevance criteria, and general mandatey stuff.
I encourage everyone to look at the example of http://MathOverflow.net/, a web community for mathematicians that started with a few graduate students just half a year ago and has grown immensely in size and productivity since then (notably enjoying regular contributions from Fields Medalist Terence Tao).
Not only do they have an FAQ, but a clearly distinguished ongoing Meta forum that was used extensively in its early development to analyze site policies:
http://mathoverflow.net/faq
http://meta.mathoverflow.net/
If we did discover a cognitive trick for making people collectively reason better, a sentence about it in an FAQ could work wonders.
Nice link. That could be very useful for me. Thanks.
I observed some time ago that Roger Penrose seemed to be a much better explainer of physics when he was using it to argue something (even though the conclusion was completely bogus) than people who graciously write textbooks that will be required reading for the students who have to buy them.
If you want good textbooks, make sure the author is trying to persuade the students of something, I’d say. I usually am.
Perhaps the process of writing should be separated from the product of writing (i.e. the textbook). The best of both worlds surely is a textbook that doesn’t try to persuade at all (since persuasion is tangential to providing an explanation), but which was written with a process involving a lot of arguing (to help stimulate the best reasoning). My brother and I sometimes had heated arguments when we wrote C# 3.0 in a Nutshell, with numerous “red ink revisions” before finally settling on the NPOVish text the reader sees.
Maybe that explains why Wikipedia is usually much clearer to read (IMO) than professionally produced encyclopedias.
From the “Why do humans reason” paper:
I’ve purchased two biology-related books by creationist-leaning authors based on this logic. They are still on the stack, so I can’t say how the experiment worked out.
4.7 years later… Did you ever read them?
I’m sure this must have been stated before this post, or is probably already well known around here, but this is just a brilliant and highly useful insight.
I guess this is the reason why “explain your problem to the rubber duck before asking humans” works.
Many times I have wandered into IRC with a programming problem, and the very moment I hit enter to send it out and read it on the screen as part of the conversation, the answer occurs to me; the same answer I’d give anybody else in that room asking the question.
Maybe I should rig my IRC client to delay actually sending the first message to a room...
What makes you think that wouldn’t delay you coming up with the answer?
I was thinking that the message would appear to have been sent to the chat room as soon as I hit enter, but not actually be sent to the IRC server until later. If the hack is a matter of seeing the question as though it were someone else’s question, then that would work. If the hack requires that I genuinely believe that other people in the room have received the question, it wouldn’t work as well… but might still work somewhat, if the delay behavior were unobtrusive enough that I partially forgot about it.
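If anyone wants to try the experiment, here is a minimal sketch in Python of what the delayed-send hack could look like, assuming a raw-socket IRC connection rather than any particular client’s plugin API; the server, channel, and delay values are all hypothetical:

```python
import socket
import threading

class DelayedIRCClient:
    """Echoes a message locally at once, but only delivers it to the
    server after a delay, giving the rubber-duck effect a chance to
    kick in before anyone else actually sees the question."""

    def __init__(self, host, port, nick, delay=60.0):
        self.delay = delay  # seconds to hold each message back
        self.sock = socket.create_connection((host, port))
        self._raw_send(f"NICK {nick}")
        self._raw_send(f"USER {nick} 0 * :{nick}")

    def _raw_send(self, line):
        self.sock.sendall((line + "\r\n").encode("utf-8"))

    def say(self, channel, message):
        # Show the message in our own window immediately...
        print(f"[{channel}] <me> {message}")
        # ...but don't actually send it until the delay expires.
        timer = threading.Timer(
            self.delay, self._raw_send, args=(f"PRIVMSG {channel} :{message}",)
        )
        timer.start()
        return timer  # caller can .cancel() once the answer occurs to them

# Hypothetical usage:
# client = DelayedIRCClient("irc.example.net", 6667, "mynick", delay=120)
# t = client.say("#programming", "Why does my parser loop on empty input?")
# ...the answer occurs to you before the delay expires...
# t.cancel()  # nobody ever saw the question
```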
Part of the problem is that we have a much better worked out theory of reasoning than of arguing. So we are tempted to apply our theory of reasoning to evaluate our arguments, where we should prefer to apply a theory of arguing. So what we need is a better theory of arguing—what counts as a good argument, a good reply, etc.
I partake in British Parliamentary Debate. A good argument:
- Is structured like an essay: tell what you are going to tell, tell it, then tell what you just told.
- Consists of a description, elaboration, and an example.
- Elaboration consists of a chain of logic that starts at the position being defended and terminates at a terminal value.
A counterargument either:
- Attacks the premises or a link in the chain of logic and shows that the argument leads somewhere other than the terminal value.
- Proves that the terminal value isn’t actually terminal.
- Constructs an alternative argument that leads to an even more important terminal value and does so in a way that makes it and the original argument mutually exclusive.
A good counterargument is concise.
For example, take the motion “This house would force everyone to publish their income on the Internet.”
This motion would lessen corruption by crowdsourcing policing. Any person could go online, compare their neighbor’s apparent wealth to their stated income, and raise an alarm should a disparity be found. The neighbor would of course know this and thus would not dare evade taxes or whatever. So we have less corruption, fewer people in jail thanks to deterrence, more taxes, and less strain on our actual police!
Attack premises: most people live in big cities in relative anonymity, neighbors don’t know each other, and wealth isn’t conspicuous.
Attack logic 1: government websites are hardly a popular destination. People simply wouldn’t care to go through tables of numbers.
Attack logic 2: people would just spend their ill-gotten gains inconspicuously. (Counter-counterargument: wealth is about signaling status, which must be visible.)
Alternative: this is a huge infringement of people’s privacy, which is more important than lessening corruption. (This one should be more elaborate, but I’m out of steam.)
Note though that British Parliamentary Debate is about winning and not truth.
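To make that structure concrete in code, here is a minimal sketch in Python; the class and field names are my own invention for illustration, not standard debate terminology:

```python
from dataclasses import dataclass
from enum import Enum, auto

@dataclass
class Argument:
    """An argument in the shape described above: a position defended
    through a chain of logic terminating at a terminal value."""
    position: str              # what is being defended
    chain_of_logic: list[str]  # links from the position to the value
    terminal_value: str        # where the chain terminates
    example: str = ""          # a concrete illustration

class Attack(Enum):
    PREMISE = auto()       # deny a premise or a link in the chain
    NOT_TERMINAL = auto()  # show the terminal value isn't terminal
    ALTERNATIVE = auto()   # a mutually exclusive, weightier value

@dataclass
class Counterargument:
    target: Argument
    kind: Attack
    content: str  # should be concise!

# The income-publication motion from above, encoded in this shape:
income_motion = Argument(
    position="everyone must publish their income on the Internet",
    chain_of_logic=[
        "neighbors can compare apparent wealth to stated income",
        "disparities get flagged, so evasion is deterred",
    ],
    terminal_value="less corruption",
    example="a neighbor's new car vs. their declared salary",
)

privacy_rebuttal = Counterargument(
    target=income_motion,
    kind=Attack.ALTERNATIVE,
    content="privacy outweighs the gain in corruption control",
)
```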
What exactly do you mean by good?
I can’t speak for Robin, of course, but I have a guess:
A good argument is an argument with money attached, as part of a bet or prediction market.
Wouldn’t that be even worse? If we currently mostly don’t use good reasoning for individual truth-seeking, but at least occasionally use good reasoning to argue, wouldn’t developing a theory of arguing contribute to displacing that, resulting in even less good reasoning? Or do you think that reasoning would become better for truth-seeking if it were freed from the other optimization goal of being good for arguing?
I’ve been chewing on this question for a while.
This WP article could serve as a starting point, though it looks a little daunting. It makes much of Stephen Toulmin’s “six elements of an argument”; I see that Toulmin hasn’t been discussed on LW so far. I’ll see if I can get some info, summarize, and evaluate the usefulness of that framework.
A proposal in line with M&S would be: a good argument is one that causes your interlocutor to accept your conclusion. A good counter-argument is one that justifies your rejecting your interlocutor’s conclusion. This conforms to the hypothesis that reason serves argument, and that its twin functions are to help us convince others and to resist being convinced.
I’m also wondering about a “memetic theory of argumentation”, where an argument spreads by virtue of convincing others, and mutates to become more convincing. Our “rules” for correct argumentation are themselves but memetic fragments that “ally” with others to increase their force of conviction. For instance, “we should reject ad hominem arguments” is a meta-argument which, if we expect that our interlocutors are likely to use it to reject our conclusions, we will avoid using for fear of making a poor initial argument. In this manner we might expect to see an overall increase in the “fitness” of arguments as a consequence of the underlying arms race.
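As a toy illustration of that arms race, here is a minimal sketch, assuming nothing more than fitness-proportional selection on a single “convincingness” score per argument; the model and all its numbers are purely illustrative:

```python
import random

def evolve_arguments(population, generations=100, mutation=0.05):
    """Toy selection loop: arguments reproduce in proportion to how
    convincing they are, with small random mutations, so the mean
    convincingness ratchets upward: the "arms race" in miniature."""
    for _ in range(generations):
        # Fitness-proportional reproduction: convincing arguments spread.
        offspring = random.choices(population, weights=population, k=len(population))
        # Mutation: each copy drifts a little; some become more convincing.
        population = [max(1e-9, a + random.gauss(0, mutation)) for a in offspring]
    return population

pool = [random.random() for _ in range(50)]  # initial convincingness scores
evolved = evolve_arguments(pool)
print(sum(pool) / len(pool), sum(evolved) / len(evolved))  # mean rises
```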
We should also be careful to distinguish conversation from argument; I see the former as serving an entirely different purpose.
To win at life you must be willing to lose at debate.
Please improve upon this slogan/proverb.
Learn to enjoy being proven wrong, or you’ll never learn anything.
If you never lose an argument, then you need to find some better arguments.
Winning an argument is satisfying; losing an argument is productive.
One suggestion would be to avoid commenting too much by having a friend or two evaluate the content of your comment before posting here; their reasoning will see your blind spots. This would also diminish the total number of comments, which in turn would lead to more people reasoning about any particular comment.
One thing that needs a solution: what system, better than karma, would encourage status-seeking rational primates to read the blog while discouraging over-commenting to rack up points?
I see the karma system as benign in this respect, because people love to argue anyway (M&S also mention that in their conclusions). They would do it without a karma outcome, so adding karma seems unlikely to affect the overall number of comments much.
There are occasional exceptions. As the “Spring meta thread” shows (or the pictures thread in the babies and bunnies post), people seem to need opportunities to just goof off. This is fine, but it has the unfortunate effect of being a distraction to those who (like me, I’m afraid) prefer the more “serious” stuff.
I downvoted some of the comments in the “This is a comment” thread, but clearly enough other folks disagree with me. Here again M&S have something useful to say: the group isn’t always right, and not all group processes track truth; only those which foster production and evaluation of arguments.
So, perhaps what we need is a dual system, with separate votes for “I like/dislike this” on one hand, and “good/stupid point” on the other.
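A minimal sketch, purely illustrative and reflecting no actual site’s codebase, of what such a dual system could look like as a data model:

```python
from dataclasses import dataclass

@dataclass
class DualVote:
    """Two independent axes: did you enjoy the comment, and did it
    make a sound point? A snarky one-liner might score high on the
    first axis and low on the second."""
    liked: int = 0  # -1, 0, or +1: "I like/dislike this"
    sound: int = 0  # -1, 0, or +1: "good/stupid point"

def tally(votes):
    """Aggregate the two axes separately instead of one karma number."""
    return (sum(v.liked for v in votes), sum(v.sound for v in votes))

# A well-liked but epistemically weak comment:
votes = [DualVote(liked=1, sound=-1), DualVote(liked=1, sound=0)]
print(tally(votes))  # (2, -1)
```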
I don’t think that follows. I enjoy drinking soda, but if someone gave me $5 every time I did it, I would drink even more soda.
More directly, as a data point, I find myself commenting more on Less Wrong than on other blogs because of the karma points system. However, an even bigger motivator for me than that is the response notification; if not for that, I would almost never comment on old threads, or post comments specifically inviting response.
If people started taking your first advice, comment quality would go up, and downvoting might be a little more prevalent, discouraging at least “pointless” posts. Also, posts in clear violation of guidelines would not happen much, unless there was visibly an exceptional benefit, lest they draw downvotes. Judging from MathOverflow (see my other comment), the downvoting doesn’t devolve into fearmongering, either, just a healthy immunity to trolling and time-wasting.
This is great Meta material ;)
Systematically downvoting comments like mine right here.
What about your comment makes it worthy of downvoting? Is it that it’s meta, or that it doesn’t add content to the conversation, or just because you asked us to?
I can’t for the life of me remember making this comment.
I’d guess that it was meta; it was semi-smart, but not very productive, so it deserved to be downvoted, which is what it asked for, and which it got. Basically, smart-sounding comments with tenuous relevance are a waste—the smartness should not be a shield.
I used to take people’s bias as evidence of manipulation and deception; now I see it as naivety. I reframe the situation and have gone from frustrated, angry, and contemptuous, to thinking of them… well, I guess now I just don’t think too much about it.
Yes, reasoning is about winning arguments, but it is also about logic! For our logic and rules of reasoning themselves organize our discourse into war games.
Our antagonistic rules of logic (where ideas, perspectives, etc., have to compete with each other for truth, and where logical might makes right) do not serve us when the goal is clear, differentiated, complex, high-quality thinking. Traditionally competitive, antagonistic reasoning introduces an extraneous motivation (to win) that diverts cognitive capacities and resources from the actual task of thinking or reasoning, and is detrimental to high-quality, complex, differentiated, as well as courageously explorative, reasoning and thinking. In an emotional-relational climate of mutual attack, built into our logic and rules of argument, we produce polarized knee-jerk opinions or resort to “safe”, easily defensible positions. (See Kohn, A. (1992). No Contest: The Case Against Competition. Why We Lose in Our Race to Win.)
What generates the highest-quality thinking, and is most inspiring and motivating to those participating, is what Kohn has called “cooperative conflict” (in contrast to both antagonistic, oppositional conflict and to simple consent). That is, a framework of cooperation within which contrasting (or “contradicting”) ideas, theories, views, opinions, arguments, etc. are cooperatively explored, as a shared project or challenge, to come to a more differentiated view together.
This isn’t true; sometimes reasoning IS about logic:
“Argumentation ethics asserts the non-aggression principle is a presupposition of every argument and so cannot be logically denied during an argument. Argumentation ethics draws on ideas from Jürgen Habermas’s and Karl-Otto Apel’s discourse ethics, from Misesian praxeology and from the political philosophy of Murray Rothbard.
Hoppe first notes that when two parties are in conflict with one another, they can choose to resolve the conflict by engaging in violence, or engaging in argumentation. In the event that they choose to engage in argumentation, Hoppe asserts that the parties have implicitly rejected violence as a way to resolve their conflict. He therefore concludes that non-violence is an underlying norm (Grundnorm) of argumentation, that is accepted by both parties.
Hoppe states that, because both parties propound propositions in the course of argumentation, and because argumentation presupposes various norms including non-violence, the act of propounding a proposition that negates the presupposed propositions of argumentation is a logical contradiction between one’s actions and one’s words (this is called a performative contradiction). Specifically, to argue that violence should be used to resolve conflicts (instead of argumentation) is a performative contradiction.[3]”