Against Devil’s Advocacy
From an article by Michael Ruse:
Richard Dawkins once called me a “creep.” He did so very publicly but meant no personal offense, and I took none: We were, and still are, friends. The cause of his ire—his anguish, even—was that, in the course of a public discussion, I was defending a position I did not truly hold. We philosophers are always doing this; it’s a version of the reductio ad absurdum argument. We do so partly to stimulate debate (especially in the classroom), partly to see how far a position can be pushed before it collapses (and why the collapse), and partly (let us be frank) out of sheer bloody-mindedness, because we like to rile the opposition.
Dawkins, however, has the moral purity—some would say the moral rigidity—of the evangelical Christian or the committed feminist. Not even for the sake of argument can he endorse something that he thinks false. To do so is not just mistaken, he feels; in some deep sense, it is wrong. Life is serious, and there are evils to be fought. There must be no compromise or equivocation, even for pedagogical reasons. As the Quakers say, “Let your yea be yea, and your nay, nay.”
Michael Ruse doesn’t get it.
When I was a kid and my father was teaching me about skepticism -
(Dad was an avid skeptic and Martin Gardner / James Randi fan, as well as being an Orthodox Jew. Let that be a lesson on the anti-healing power of compartmentalization.)
- he used the example of the hypothesis: “There is an object one foot across in the asteroid belt composed entirely of chocolate cake.” You would have to search the whole asteroid belt to disprove this hypothesis. But though this hypothesis is very hard to disprove, there aren’t good arguments for it.
And the child-Eliezer asked his mind to search for arguments that there was a chocolate cake in the asteroid belt. Lo, his mind returned the reply: “Since the asteroid-belt-chocolate-cake is one of the classic examples of a bad hypothesis, if anyone ever invents a time machine, some prankster will toss a chocolate cake back into the 20th-century asteroid belt, making it true all along.”
Thus—at a very young age—I discovered that my mind could, if asked, invent arguments for anything.
I know people whose sanity has been destroyed by this discovery. They conclude that Reason can be used to argue for anything. And so there is no point in arguing that God doesn’t exist, because you could just as well argue that God does exist. Nothing left but to believe whatever you want.
Having given up, they develop whole philosophies of self-inflation to make their despair seem Deeply Wise. If they catch you trying to use Reason, they will smile and pat you on the head and say, “Oh, someday you’ll discover that you can argue for anything.”
Perhaps even now, my readers are thinking, “Uh-oh, Eliezer can rationalize anything, that’s not a good sign.”
But you know… being mentally agile doesn’t always doom you to disaster. I mean, you might expect that it would. Yet sometimes practice turns out to be different from theory.
Rationalization came too easily to me. It was visibly just a game.
If I had been less imaginative and more easily stumped—if I had not found myself able to argue for any proposition no matter how bizarre—then perhaps I would have confused the activity with thinking.
But I could argue even for chocolate cake in the asteroid belt. It wasn’t even difficult; my mind coughed up the argument immediately. It was very clear that this was fake thinking and not real thinking. I never for a moment confused the game with real life. I didn’t start thinking there might really be a chocolate cake in the asteroid belt.
You might expect that any child with enough mental agility to come up with arguments for anything, would surely be doomed. But intelligence doesn’t always do so much damage as you might think. In this case, it just set me up, at a very early age, to distinguish “reasoning” from “rationalizing”. They felt different.
Perhaps I’m misremembering… but it seems to me that, even at that young age, I looked at my mind’s amazing clever argument for a time-traveling chocolate cake, and thought: I’ve got to avoid doing that.
(Though there are much more subtle cognitive implementations of rationalizing processes than blatant, obvious, conscious search for favorable arguments. A wordless flinch away from an idea can undo you as surely as a deliberate search for arguments against it. Those subtler processes, I only began to notice years later.)
I picked up an intuitive sense that real thinking was that which could force you into an answer whether you liked it or not, and fake thinking was that which could argue for anything.
This was an incredibly valuable lesson -
(Though, like many principles that my young self obtained by reversing stupidity, it gave good advice on specific concrete problems; but went wildly astray when I tried to use it to make abstract deductions, e.g. about the nature of morality.)
- which was one of the major drivers behind my break with Judaism. The elaborate arguments and counterarguments of ancient rabbis looked like the kind of fake thinking I did to argue that there was chocolate cake in the asteroid belt. Only the rabbis had forgotten it was a game, and were actually taking it seriously.
Believe me, I understand the Traditional argument behind Devil’s Advocacy. By arguing the opposing position, you increase your mental flexibility. You shake yourself out of your old shoes. You get a chance to gather evidence against your position, instead of arguing for it. You rotate things around, see them from a different viewpoint. Turnabout is fair play, so you turn about, to play fair.
Perhaps this is what Michael Ruse was thinking when he accused Richard Dawkins of “moral rigidity”.
I surely don’t mean to teach people to say: “Since I believe in fairies, I ought not to expect to find any good arguments against their existence, therefore I will not search because the mental effort has a low expected utility.” That comes under the heading of: If you want to shoot your foot off, it is never the least bit difficult to do so.
Maybe there are some stages of life, or some states of mind, in which you can be helped by trying to play Devil’s Advocate. Students who have genuinely never thought of trying to search for arguments on both sides of an issue, may be helped by the notion of “Devil’s Advocate”.
But with anyone in this state of mind, I would sooner begin by teaching them that policy debates should not appear one-sided. There is no expectation against having strong arguments on both sides of a policy debate; single actions have multiple consequences. If you can’t think of strong arguments against your most precious favored policies, or strong arguments for policies that you hate but which other people endorse, then indeed, you very likely have a problem that could be described as “failing to see the other points of view”.
You, dear reader, are probably a sophisticated enough reasoner that if you manage to get yourself stuck in an advanced rut, dutifully playing Devil’s Advocate won’t get you out of it. You’ll just subconsciously avoid any Devil’s arguments that make you genuinely nervous, and then congratulate yourself for doing your duty. People at this level need stronger medicine. (So far I’ve only covered medium-strength medicine.)
If you can bring yourself to a state of real doubt and genuine curiosity, there is no need for Devil’s Advocacy. You can investigate the contrary position because you think it might be really genuinely true, not because you are playing games with time-traveling chocolate cakes. If you cannot find this trace of true doubt within yourself, can merely playing Devil’s Advocate help you?
I have no trouble thinking of arguments for why the Singularity won’t happen for another 50 years. With some effort, I can make a case for why it might not happen in 100 years. I can also think of plausible-sounding scenarios in which the Singularity happens in two minutes, i.e., someone ran a covert research project and it is finishing right now. I can think of plausible arguments for 10-year, 20-year, 30-year, and 40-year timeframes.
This is not because I am good at playing Devil’s Advocate and coming up with clever arguments. It’s because I really don’t know. A true doubt exists in each case, and I can follow my doubt to find the source of a genuine argument. Or if you prefer: I really don’t know, because I can come up with all these plausible arguments.
On the other hand, it is really hard for me to visualize the proposition that there is no kind of mind substantially stronger than a human one. I have trouble believing that the human brain, which just barely suffices to run a technological civilization that can build a computer, is also the theoretical upper limit of effective intelligence. I cannot argue effectively for that, because I do not believe it. Or if you prefer, I do not believe it, because I cannot argue effectively for it. If you want that idea argued, find someone who really believes it. Since a very young age, I’ve been endeavoring to get away from those modes of thought where you can argue for just anything.
In the state of mind and stage of life where you are trying to distinguish rationality from rationalization, and trying to tell the difference between weak arguments and strong arguments, Devil’s Advocate cannot lead you to unfake modes of reasoning. Its only power is that it may perhaps show you the fake modes which operate equally well on any side, and tell you when you are uncertain.
There is no chess grandmaster who can play only black, or only white; but in the battles of Reason, a soldier who fights with equal strength on any side has zero force.
That’s what Richard Dawkins understands that Michael Ruse doesn’t—that Reason is not a game.
Added: Brandon argues that Devil’s Advocacy is most importantly a social rather than individual process, which aspect I confess I wasn’t thinking about.
A much stronger argument for the chocolate cake would be that there must be some incredibly small probability that atoms would come together by chance to form a chocolate cake in the asteroid belt. However, all physical possibilities are real, according to the argument for many worlds. Therefore there is actually a chocolate cake in the asteroid belt. It just happens to be very distant from our blob of amplitude.
A similar case: there must be a world where your arm transforms into a blue tentacle, even if this world has an incredibly small amount of amplitude. Granted that you don’t expect to see this happen, there is still a different version of Eliezer who does see it happen. Of course, as you have argued, he cannot explain it. But what do you think he says about it when people ask why it happened? Does he begin to believe in magic?
The only way to interpret the determiners in “there is a chocolate cake in the asteroid belt” that complies with Egan’s Law refers to ‘our’ asteroid belt in this blob of amplitude.
Configurations like that may have amplitudes so small that stray flows of amplitude from larger worlds dominate their neighboring configurations, preventing any computation from taking place.
Even if such worlds do ‘exist’, whether I believe in magic within them is unimportant, since they are so tiny; and also there is no reason to privilege that hypothesis as something to react to, since the real reason we are discussing that world is someone else choosing to single it out for discussion.
Since there is a good deal of literature indicating that our own world has a surprisingly tiny probability (ref: any introduction to the Anthropic Principle), I try not to dismiss the fate of such “fringe worlds” as completely unimportant.
army1987’s argument above seems very good, though; I suggest you take his comment very seriously.
The asteroid belt has all the atoms needed to make the cake; the only issue is their arrangement, and they’re presently arranged in a specific configuration that is as unlikely—as low-amplitude—as if they were arranged into a bunch of cakes. It’s just that the highly unlikely configurations that look like asteroids are far more numerous than the ones that look like cakes (which is a property of the looks-like-cake function).
Basically, it’s a common fallacy to believe that a coin toss sequence such as HHHHHHH is less probable than HTTHHTH. It isn’t, and if you were to throw a quantum coin in a quantum many-worlds universe, the world where it was all heads will have the same amplitude as every other sequence’s world.
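The coin-toss point is easy to check mechanically. Here is a minimal Python sketch (illustrative only, not part of the original comment): every specific sequence of fair tosses is equally likely; what differs is how many sequences fall into each coarse category, such as “all heads” versus “exactly three heads”.

```python
from fractions import Fraction
from itertools import product

n = 7  # number of tosses

# Every specific sequence of 7 fair tosses -- HHHHHHH included --
# has the same probability: 1 / 2**7.
p_specific = Fraction(1, 2 ** n)

# Enumerate all 2**7 = 128 possible sequences.
sequences = list(product("HT", repeat=n))
assert len(sequences) == 2 ** n

# Only one sequence is all heads, while C(7,3) = 35 sequences
# contain exactly three heads -- hence "three heads somewhere"
# is 35x more probable as a category, though no individual
# sequence is more probable than any other.
all_heads = sum(1 for s in sequences if s.count("H") == n)
three_heads = sum(1 for s in sequences if s.count("H") == 3)
print(p_specific, all_heads, three_heads)  # 1/128 1 35
```

The same counting logic is the comment’s point about the asteroid belt: asteroid-like arrangements of atoms vastly outnumber cake-like ones, even though each specific arrangement is equally improbable.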
(Also, any “stray flows of amplitude” require non-linear Schrödinger’s equation, of a very very specific kind so that you don’t end up with essentially one world)
This strikes me as a strong claim; your post sounds quite certain about mangled worlds, but as far as I’m aware, it hasn’t actually been verified. Yes?
It’s a branch refutation; strongly refuted if mangled worlds is true (hence ‘may’) but somewhat more weakly refuted if it’s not.
An aside: I was under the impression that this post is outdated by now, and that the idea of Devil’s advocacy has been superseded by steelmanning, a term virtually nonexistent until Luke popularized it in 2011.
“On the other hand, it is really hard for me to visualize the proposition that there is no kind of mind substantially stronger than a human one. I have trouble believing that the human brain, which just barely suffices to run a technological civilization that can build a computer, is also the theoretical upper limit of effective intelligence.”
I don’t think visualization is a very good test of whether a proposition is true. I can visualize an asteroidal chocolate cake much more easily than an entire cake-free asteroid belt.
But what about other ways for your Singularity to be impossible?
I imagine two ideal debating agents, each with a set of facts and tools with which to make logical connections between those facts. They start to debate an issue and decide that they agree on which is the most plausible answer, but also see a large number of flaws in the other side’s argument. As ideal debaters they don’t ‘overlook’ those flaws just because they imply the conclusion they want—they highlight them and ask for logical answers. In some cases one side will start to look like a devil’s advocate as a result of how debates form and the nature of the alternate scenarios.
With us non-ideal agents, I instead see people intentionally suppressing arguments that they know are good because they don’t further their side of the argument.
I’m inclined to think that if you can’t argue for the other side’s position, you probably don’t fully understand it. If their position has zero probability of being true, that may be no issue at all, but there are not that many things I could ascribe that level of certainty to.
An alternate way to criticize random devil’s advocacy of the chocolate cake in the asteroid belt variety is to point out that we seem to be in a disaster movie, not a Seinfeldian sitcom. It might be a good idea for us to try to pick and choose wisely where we focus our time and energy. (A question is whether the movie is of the Hollywood or of the European/French variety. Here’s hoping for a Hollywood ending.)
if you manage to get yourself stuck in an advanced rut, dutifully playing Devil’s Advocate won’t get you out of it.
It’s not a binary either/or proposition, but a spectrum; you can be in a sufficiently shallow rut that a mechanical rule of “when reasoning, search for evidence against the proposition you’re currently leaning towards” might rescue you in a situation where you would otherwise fail to come to the correct conclusion. That said, yes, it would indeed be preferable to conduct the search because you actually have “true doubt” and lack overconfidence, rather than by rote, and rather than for the odd reasons that Michael Ruse gives.
Dad was an avid skeptic and Martin Gardner / James Randi fan, as well as being an Orthodox Jew. Let that be a lesson on the anti-healing power of compartmentalization
Why do you think that, if he had not compartmentalized, he would have rejected Orthodox Judaism, rather than rejecting skepticism?
In high school I was on a debating team, and I can remember eventually forming the view that it was a potentially corrupting exercise, because you had to argue for the position you were given, not the position that you believed or the position that you might rationally favor. Occasionally the format permitted creative responses; I recall that once, the affirmative team had to argue ‘That Australia has failed the Aborigine’, and we on the negative team decided to outflank rather than straightforwardly oppose; we said that wasn’t true because what Australia had done was much much worse than that. But even that was basically an exercise in lawyerly ingenuity, resulting from a desire to win rather than from a desire to arrive at the truth.
I have always found the debates that we had to do in school difficult and painful, mostly because of having to argue for points of view that I didn’t believe. The problem wasn’t, however, that I believed strongly in one side but had to argue the other–it was usually that I didn’t strongly believe in either side, found the arguments for both sides reasonable, and found it hard not to play Devil’s Advocate with myself during a debate (warning: this will annoy the rest of your debating team!)
It’s not that I don’t have beliefs or opinions–it’s just that they tend not to be black-and-white, and I’m constantly questioning myself, i.e. “No, I don’t think God exists, but I do think the question of what humans experience when they claim to experience God’s presence is really interesting and should be studied more, and I think faith-based institutions usually do more good than harm, and I can empathize with the emotional state of someone who believes in God, so if they say their belief makes them stronger, who am I to question that–I think it’s a fact about the world that God doesn’t exist, and a fact about my brain that believing true things makes me stronger, but I know people who’ve experienced a lot of emotional trauma and they might be right that, in the short term, faith does make them stronger in the sense of being able to cope better with the randomness of day-to-day life...” I’m like that even more on questions where I don’t think I’m educated enough to have any kind of opinion. Imagine how that would go over in a standard debate.
Then again, the usual debate question is something like “does violence in video games make children more violent?” That may be an empirical question, but at least back in high school when I had to debate on it, it hadn’t been researched enough for someone to argue either side based on the evidence. Also, the answer you get when you study it probably depends on how you define your terms, since “children”, “violence”, and “video games” are all embedded parts of a massively complex system with many, many inputs and outputs much more complicated than just a sliding scale of violent tendencies. To me, the ability to come up with arguments for why one side is true won’t actually help you do anything more intelligently in your life.
The first example you give doesn’t sound to me like exploring multiple sides of the question “Does God exist?” so much as exploring multiple questions I might ask instead: is what humans experience when they claim to experience God’s presence interesting? Should it be studied more? Do faith-based institutions do more good than harm? Can I empathize with a believer? Can belief in God make one stronger, and if so under what circumstances? Etc.
You’re right, they aren’t the same question–but that’s what my brain brought up when queried with “what would you say if asked to debate the existence of God?” Somehow just saying that “no, I think that God doesn’t exist for reasons X, Y, Z” doesn’t seem to be enough. I think this may be because of the “arguments as soldiers” approach–if I tell someone I’m an atheist, but don’t go on to clarify my beliefs on all those other questions, the assumption tends to be that I must think theists are stupid, stupid people.
I think it also might be a strategy I use to increase the feeling of “being on the same side” when talking to people who I know are theists, since not clarifying might lead them to believe that I’m, in some sense, their intellectual enemy.
Oh, absolutely. Answering a different question than the one I’m asked is often a useful rhetorical technique, for lots of reasons, including the ones you list.
What about the possibility that the answer to “Is playing Devil’s Advocate a good strategy?” is context-dependent?
There could be a set of psychological answers (possibly different for different kind of personalities) to the question applied to individual behavior—as in “would it help a type X person (on average) to overcome its biases and/or to think more creatively?”
And there could be a set of sociological answers (possibly different for different types or organizations, like schools, firms, research centers, etc.) to the question applied to organizational behavior—as in “would it help (on average) to always/sometimes have a “red team” that tries to shoot down the main team’s theories?”
On the latter point, I have heard that the EU’s Directorate General for Competition policy was quite happy with the introduction of a “Devil’s Advocate” policy a few years ago. Assuming the policy was indeed successful, it may be however that the success was due to the peculiar nature of the institution (i.e., the fact that they are participant in a judicial process and that the “red team” simply prepares them better for the questioning they might face from the real “enemy advocates”).
Any empirical research on these issues that you know of?
Devil’s Advocacy explained: “While many undergraduates prefer teachers to give them the one true answer on policy questions, I try to avoid taking sides. In class discussions, I ask students to take and defend policy positions, and then I defend any undefended major positions, and mention any unmentioned major criticisms of these positions. The important thing is for students to understand the sorts of arguments that an economist would respect.” Hanson :-)
mitchell porter: the crazy thing is, that sort of “clever liar, dumb audience” approach can still be a temptation when arguing with yourself.
I agree with the abstract point of the post, but I think it’s unrelated to what Ruse is talking about. As I interpreted the quote, Ruse isn’t concerned with how internally playing Devil’s Advocate influences individual beliefs; he’s interested in whether publicly playing Devil’s Advocate is a good thing for the public understanding of the truth. The view he attributes to Dawkins is that publicly playing Devil’s Advocate is always bad, because members of the Other Side who are prone to rationalization will readily (though wrongly) interpret it as definitive support for their position. (This is a defensible position even if you have real doubt about your own side, as you can still believe that the evidence points more to your side than any of the others—and therefore oppose actions that will lead people to think otherwise.)
I think you and Ruse may be defining “Devil’s Advocate” differently. I’ve usually heard the phrase used to preface the presentation of evidence that goes against a group’s consensus. If a discussion group has just decided, after considering the issue, that there is probably no cake in the asteroid belt, someone might mention some additional pro-cake arguments with the disclaimer “I’m just playing the Devil’s Advocate...” This doesn’t mean that they’re engaging in some special, cognitively different task where they actively try to take the perspective of a cake-believer, or come up with arguments in a “rationalizing” manner. It may just be reasoned consideration—the exact same sort of thinking that led them to their no-cake conclusion—that happens to support the cake idea, and the phrase “Devil’s Advocate” just reminds the group that the speaker still remembers the anti-cake thrust of the overall evidence. In other words, once people come to a conclusion (particularly one that is emotionally or politically sensitive), they seem to file all evidence for the other side—as in true, good evidence—under the label of “Devil’s Advocate.” I get the feeling that this is what Ruse meant. He wasn’t rationalizing, he was just publicly presenting evidence that supports a conclusion he didn’t believe.
Odd that, in a paean to Reason, the deciding factor in how a line of argument is evaluated is how it makes someone feel.
Is it not possible that looking for support for claims we believe to be absurd feels different than looking for support for claims we take very seriously?
In nearly all Classical schools of logic & rhetoric—Sanskrit, Talmudic, Buddhist, Socratic, Islamic, and Christian as shown by St. Thomas Aquinas—an educated person is expected to know all the major arguments for the important sides of key questions.
In fact, to obtain a “doctorate,” I have often heard that Tibetan Buddhists used to require a student monk to perform an all-day public argumentation of several sides of the same question, refute all the arguments, and offer a new, original synthesis. I believe the Indian Math followed the same procedure for millennia, and Socrates got himself killed for embarrassing public figures by humiliating them this way in the agora.
The ability to argue your opponents’ position better than they can and then handily defeating it in summary has long been considered the mastery of rhetoric. Thus playing the Devil’s Advocate is a valuable intellectual tool for the aspiring orator or public intellectual. Ruse is right; Dawkins, wrong.
The ability to play devil’s advocate may indeed make you a better orator, and thus better at “public” intellectualism. But I don’t think this is the point of the article. The point is that the human brain is a messy, biased thing that can convince itself of almost anything, and if you want to be reacting to reality itself, instead of to your own wishful thinking, you don’t want to encourage habits of thinking that would make you better at convincing yourself of untruths. And given the messiness of human reasoning (priming, etc), even if a public intellectual resolved to base his own beliefs off one process while playing the devil’s advocate in public, he is likely to contaminate his personal beliefs in the process.
EY: Has it not occurred to you that maybe, what you took to be extreme aptitude at rationalization is in fact a foil that you set up so that your actual rationalization process could continue? Because I’m pretty sure that it has. Continued, that is.
In fact, to continue my general thread of questioning the existence of intelligence, I’m not sure that there is any mode of human reasoning besides 1) rationalization and 2) deduction according to rules in a formal system. Probably, what we call rationalization is also the same process that tells you that if you find a nibbled wheel of cheese, there might be a mouse in your kitchen. It’s just a fill-in-the-blanks madlibs-like game, that happens to give the right answer a very respectable fraction of the time.
Which is to say that, if you’re not using any kind of formal mathematical argument, you’re rationalizing. And come to think of it, this meshes nicely with the earlier example of Einstein finding relativity by pure thought. It’s true that he didn’t conduct any physical experiments, but we might say that he conducted mathematical experiments to find a self-consistent mathematical system to represent his idea. If he hadn’t found such a system, he would presumably have discarded his idea, and so in effect, what he was doing was not pure rationalization.
Frelkins, the aspiring orator or public intellectual is someone who wants to impress people; he is engaging in a power game or vanity game etc.
A truth-seeker does not want to impress people, he or she or ve wants to know. Reason, as Eli said, is not a game.
“That’s what Richard Dawkins understands that Michael Ruse doesn’t—that Reason is not a game.”
Dawkins is also acutely aware that his opponents won’t always play fair, and have often quoted him and other scientists out of context to try to make it seem like they hold positions that they don’t actually hold. That’s why he wants to have a tape recorder when he dies, so there can’t be rumors about his “deathbed conversion”.
@Guenther Griendl
Obviously you have never read, for example, Cicero. Truth really mattered to many ancient public figures and to the most prominent ancient philosophers.
I cannot think of anyone whose love for the truth exceeded Socrates. Because he understood that truth has important public ramifications, not merely private, he basically martyred himself. Prominent figures must not be allowed to lie, or to have their faulty beliefs go unchallenged.
Public lies put people’s lives at risk, in case you’ve noticed. Thucydides gives several sad examples.
“A truth-seeker does not want to impress people, he or she or ve wants to know.”
What is the point of being a “truth-seeker”?
@Frelkins,
well, actually I did read Cicero in school, and I like Socrates’ attitude; but I don’t quite see in what way you are responding to my post?
I just wanted to clarify that the skill of oratory may be a valuable asset for people, but being a good orator does not make you a good truth-seeker.
This is very dangerous. I think a great example of its danger is Colin McGinn (popularizer of mysterianism) in his The Making of a Philosopher. He says that what attracted him to philosophy was the ability to reason one's way to contrarian opinions. Being forced to an answer has an appeal of its own. This is a major problem in the transhumanist and libertarian communities, for example, where bullet-biting is much more highly regarded than having your facts straight.
There is a story of Carneades of Cyrene, a post-classical philosopher, who came to Rome as an ambassador of Athens. As a member of the New Academy, Carneades was well versed in sceptical argumentation and stood steadfastly against dogma of any kind. Once in Rome he proceeded to deliver a spellbinding address, arguing that justice should top a list of human motives. The following day, in service of his real argument concerning the uncertainty of human knowledge (deep scepticism), he proceeded to contradict the argument given the previous day, arguing instead that justice should rank much lower on the scale of human motives. Both declamations were more or less equally compelling. Cato the Elder sent Carneades packing, presumably out of concern that scepticism widely practised would undermine the Roman military culture and muddle popular thinking.
But somewhere I believe Dawkins is reported as having quipped that it is no bad thing to have an open mind, so long as it is not so open that one’s brains spill out. Doesn’t rationality require one to respect one’s evidence, which one cannot expect to do if one does not know what it is? Therefore, doesn’t one have to posit knowledge as unanalysable, in contrast to belief which may be true or false?
“But with anyone in this state of mind, I would sooner begin by teaching them that policy debates should not appear one-sided.” I think you have to qualify this statement with “unresolved” policy debates.
I’ll take the positions: 1) another Holocaust would be a bad thing. 2) global warming is real, and S.Harper and GWB are real existential risk maximizing actors. 3) the US prison economy (construction, staffing and forced prison labour), now consuming more resources than universities in your retarded country, is a conflict of interest. It won’t help students at all to adopt the opposite positions.
The problem with taking evil positions “just for kicks” is that many of these positions are adopted in real life. There are powerful (low-teens percentage) political minorities in Europe and Russia that wouldn’t mind another Holocaust and would welcome more skeptical minds like EY briefly adopting their positions. Same for oil supporters in Canada and the USA that presently run the world and are actively seeking humanity’s destruction. The USA incarcerates a greater % of its population than anyone, and is practically a 3rd world country. Slavery is still alive in the USA.
“unresolved” turns the above brain-sharpening positions into acceptable (but still false) policy positions: 1) Immigration should be reduced, or union jobs should be subsidized with public funds, or cultural minorities should melting-pot. 2) I’m greedy and would rather consume than stabilize Earth for future generations. 3) We need retarded Republican policies to try to maintain global military hegemony, and the Republican alliance shouldn’t be fractured; also, incarcerating Democrats prevents them from voting.
Don’t encourage malleable students to adopt evil positions, they may like it.
For the record, Martin Gardner believes in God (he calls himself a “fideist”)
What is the point of being a “truth-seeker”?
If you are not an avid truth-seeker, then please do not do scientific research or develop new technologies and please do not become a cultural leader or innovator and please do not participate in serious public discourse about law, economics, politics or education because it takes a keener ability to distinguish truth from falsehood than most people have just to avoid unknowingly doing evil in those pursuits.
On the other hand, it is really hard for me to visualize the proposition that there is no kind of mind substantially stronger than a human one.
There are many different dimensions of “strength” that we could use to distinguish minds. Given how important the concept of “intelligence” is to you I’d think it would be profitable for you to break this concept into components and phrase your discussions more in terms of those components.
It’s Michael Ruse, not Rose.
Don’t you mean Michael Ruse?
Knowledge is generally quantum in nature: there are many possible futures. Thus, Alice might believe Schrodinger’s cat is alive, and Bob might believe it is dead. Such genuine disagreement can occur even in a purely deterministic Newtonian world where people have distinct bits of imperfect knowledge—people can genuinely believe false things and have no way of knowing that they are false. Indeed, in many ways this is the normal state of knowledge, since by the pigeonhole principle no individual can host more than a tiny fraction of the world’s data in his brain. Another reason for genuine disagreement is that people believe things that are true given the way they use words, but an opponent believes it is false given the way the opponent uses words. This kind of disagreement is one over description rather than over external reality.
Whether the disagreement is over semantics or external reality, both sides can have arguments in their favor, and arguments against, and it is often highly non-obvious how to reconcile the contradictions. People arrive closer to the truth by discussions in which it is understood that either side may be wrong.
Legal disputes follow the same “quantum” logic. We don’t want to have cops go around shooting people just because they strongly believe they are guilty. Rather we go through a process that assumes that the party might be innocent or guilty—much like a quantum state. Evidence is gathered and each side is allowed to put forward arguments in its favor, hopefully, the evidence causes ignorance to collapse, and a verdict can be more confidently reached.
In this world of uncertainty your opponent’s argument can be as important as your own. An effective seeker after truth, as well as the effective advocate, understands counter-arguments, in order to discover the holes either in that argument or in one’s own.
Michael Ruse is thus quite right about this. If Richard Dawkins wants to debate creation or religion, he should be happy to engage with the “devil’s advocate” arguments of his opponents. If he doesn’t want to do that, he won’t be an effective advocate and he’s wasting his time. A dogmatic profession that “God does not exist” is almost completely uninformative; we already know that millions of people are atheists.
(In fact, one of the beauties of The Blind Watchmaker is that Dawkins understood and responded to creationist arguments, e.g. the watchmaker analogy, far better than other advocates of evolution, who usually neglected to explain the highly improbable design-like products of evolution. In the process he highlighted a crucial aspect of evolution, adaptation, that many others writing about evolution misleadingly downplayed, allowing creationist arguments based on accurate observations of the design-like nature of organisms to go unrebutted. Dawkins had to play plenty of devil’s advocate, at least in his own mind, to achieve this understanding of creationist arguments, and in the process we also learned more about evolution. It would be a shame if he has lost this skill and taken to just dogmatically asserting his beliefs.)
Strong feelings and personal confidence often have little correlation to actual truth. Even if you believe strongly that you are right, or perhaps especially if you have strong beliefs, if you are going to engage in debate you should understand the “devil’s advocate” arguments of your opponents. If you think it’s a waste of time to understand arguments in favor of religion, asteroidal chocolate, or anything else that you find credible or incredible, you should not be surprised that (a) you’ll be very bad at convincing people who don’t share your beliefs to share them, and (b) if you do turn out to be wrong, you won’t understand why you are wrong. It’s fair to say that for the chocolate cake, since nobody believes it, there is nobody that needs convincing, and it’s also fair to not be concerned about the risk that one’s opinion about it is wrong. It’s also reasonable to ignore religion and creationism for the second reason. But if you want to convince a religious person to be an atheist, or a creationist to be an evolutionist, you’ll be far better off understanding their arguments first, and you might well arrive at more accurate forms of atheism and evolution in the process.
I second rf and Michael G.R.
I can’t tell what concrete behavior Eliezer is advising us against. Per rf, it doesn’t seem to match up with the common usage of “devil’s advocacy”. Is “don’t argue as if you genuinely believed other than you do” a good summary?
Tangential argument: existential risk maximizing actors, thank goodness, don’t exist, nor do more than a tiny number of people seeking to destroy humanity. Beware the Angry Death Spiral.
The cut and thrust of intellectual debate is pragmatically a good way to stimulate one’s mind on a subject. Unfortunately, you can only practice it if you disagree with someone of comparable intellect on the subject you want to think about. So what are you supposed to do when the only people of comparable intellect to you that you have handy agree with you, and naturally will not debate you? You take on the mantle of Devil’s Advocate, and create an artificial debate.
Given available data, there’s only one valid set of conclusions we can reach. Being a ‘devil’s advocate’ is worthwhile if you need to make sure you’re accurately considering all of the points against a thesis as well as for, but if you’re reasoning correctly there’s no point to taking an opposing position for argument’s sake.
In order to take a position other than the one supported by the data, we need to ignore the points that make that position invalid. That’s a dangerous thing to do, when human minds have a natural tendency to ignore contradictory information. It’s like pointing a gun at someone even though you believe it’s unloaded—it’s extremely irresponsible even if it happens to be true.
In order to take a position other than the one supported by the data, we need to ignore the points that make that position invalid.
Perhaps, but devil’s advocacy doesn’t require taking a position other than the one supported by the data; the phrase “devil’s advocate”, in fact, suggests that you don’t. (Otherwise, you’d simply be an advocate.) The devil’s advocate proposes objections for which a position should have answers; this is different from accepting the objections, and certainly different from ignoring the reasons for accepting the position being examined.
Funny enough, I just read in my RSS feed James Q. Wilson defending the amount of incarceration in the U.S. http://volokh.com/posts/1212699333.shtml http://volokh.com/posts/1213047646.shtml
I’m with Hanson, favoring Devil’s Advocacy.
“Tangential argument: existential risk maximizing actors, thank goodness, don’t exist, nor do more than a tiny number of people seeking to destroy humanity. Beware the Angry Death Spiral.”
I think I’ll stand by my words and qualify the statement: maybe GWB could start WWIII single-handedly and isn’t, so this pertains only to the threat of global warming. S.Harper couldn’t be misplaying the threat worse. Canada’s governing structure has a provision where the Queen of England is the real head of state, and the Governor General would almost certainly remove our PM from power if he did things like igniting Canada’s coal reserves (a nice trick to have in the arsenal, though, if the world is heading towards an Ice Age, as Ice Ages typically onset in decades or less). We are very early in on Global Warming. If GWB and S.Harper were acting as they are now a decade or two from now, my correct position would be the mainstream. I didn’t mean actively as in willfully, like Nazi evil. I meant it more like allowing a population to starve (literally in this context), like Soviet evil inflicted on Ukraine. GWB and S.Harper know full well what they are doing is greedy; they both know enough, or are purposely (as opposed to unintentionally) avoiding the knowledge. Yep, I stand by my statement. When history looks back, if we make it there, GWB and S.Harper will be seen as among the worst leaders their respective nations have ever had, solely on the demerits of their handling of Global Warming. B.Obama and S.Dion, solely by coming after them with an environmental platform that doesn’t threaten to destroy humanity for short-term profit, will go down in history as at least above-average leaders.
Am I part of an angry death spiral? I think my comments are measured. Probably even kind. S.Harper’s first act of government was to cancel 17 Canadian Global Warming research programmes, including a critical ocean one. Do I really need to post what dubya has done on this file? The angry death spiral only happens if Republicans and Conservatives maintain power over the years ahead. America finally cashes in its WWII credit and Canada temporarily loses post-modern status. Not a spiral. Yet.
About Devil’s Advocacy, it is fine as long as it is stated. Don’t go claiming the Holocaust was a good thing and should be completed this time around, without mentioning the part about just wanting to heighten the quality of debating skills.
TGGP, if present rates of US prison incarceration had existed historically, the USA would never have been a superpower. $100,000 a person annually, at 3 million people; you do the math. The worst part is they are all black and poor. They are being imprisoned because: 1) they can’t afford lawyers, 2) they are black, 3) only thirdly, they are guilty. Let’s cut the crap: most people reading this blog have committed offenses that could see them imprisoned for years. Drugs and pills without a prescription are obvious, but there are many more. If you are black or poor, you are far more likely to be caught and imprisoned. There is a prison quota to pay administrative prison salaries and to ensure work for construction contractors. Prisoners are forced to labour at rates that are used to undercut SE Asian labour force bids; try to say no to the work and not get raped by other prisoners. Prisoners in maximum security US institutions learn criminal skills (good in learning how to grow weed, bad in most other skills), are emotionally hardened, and are subject (along with staff) to an environment that encourages mental health problems. I wasn’t kidding when I said the USA now spends more on prisons than on higher education; only 3rd world countries do this. The contractors could literally be building public schools, and the staff could be trained in tangential security and health occupations (in Canada, 20% of nursing home residents are subject to assault by mentally ill residents, something that could be avoided with a former prison guard on duty). More importantly, in my eyes, imprisoning so many innocent people somewhat condones all other crimes. The above excludes the health care costs of increased hepatitis, drug usage, mental health problems, HIV, staff stress… I don’t know who James Q. Wilson is, but I am 100% certain he is not African American or a poor person in the Western world. If he were, he would reflexively adopt a correct policy position.
He likely earns 6 figures and is white. The police have never pulled him over and searched his pockets for pills or given him a breathalyzer after a car accident.
Am I part of an angry death spiral? I think my comments are measured.
I think they are immoderate. Also, your last two comments are almost completely off-topic.
″...Also, your last two comments are almost completely off-topic.”
I was just playing the Devil’s Advocate, screwing around to “help” others build debating skills while not telling them I was wasting their time :)
Richard Dawkins claims he doesn’t believe in Devil’s Advocacy? He is just such a typical Aries!
Is devil’s advocacy just another get-out-of-argument-free card, like “Well, that’s just my opinion” and “I guess we’ll have to agree to disagree”? That is, something you say when you’ve “lost” the argument, or are about to lose, and want to withdraw without conceding social status (from the debate-as-game perspective) or altering your opinion (from the debate-as-truth-seeking perspective). That certainly looks like Phillip’s use of it above, although he’s obviously joking.
Even if you start out by saying “I’m just playing devil’s advocate”, as Phillip suggests in the beginning, you could in fact be counting on the fact that if your argument dominates, you can gradually play down your devil’s advocacy and let it be known that you’ve managed to persuade yourself as well.
So I remain suspicious of declarations of devil’s advocacy. People who deploy get-out-of-jail-free cards in arguments are the same kids who refused to lie down when they were shot in a playground game of Cowboys and Indians, always shouting “I’ve got bulletproof armour!” or some such. A game where one or more players can win or withdraw but never lose is clearly broken.
I recently stumbled on this realization when I was talking with friends about the myriad problems, inconsistencies, and annoyances of the Star Wars universe. After the umpteenth conflict was brought up, agreed upon, then given a plausible-sounding explanation, I was a little bit shocked at how easily Lucas’s Advocacy came to me. It wasn’t a big extrapolation to see that Devil’s Advocacy was nearly as easy.
I think I’ve come to enjoy role playing games less since stumbling upon Less Wrong. Aside from being a communal activity, I find diminished value in obsessing over a map that is only as detailed as the players and DM construct. Maybe if the other players were as interested in models of reality as me it would be different. But in a game where you just Use The Force to fix all problems, clever tricks are largely unrewarded and thus unsatisfying.
Far more interesting to try to figure out this Bayes stuff. For instance, I saw the results of a study in a New Scientist article. The article claimed that of 108 women, 75 claimed to have intuitive knowledge of their baby’s sex, and of them, 60% were correct. I was trying to figure out the probability of a woman being intuitive, given that she was correct. It was a failure, since they did not say how correct the women who did not have intuitions were. But it was fun, and deeply satisfying to apply knowledge to a problem. I just realized the article linked to some press release, which seems like a bunch of woo-woo. Now I really want to know how accurate the women who didn’t profess magical intuition dream powers were.
I find roleplaying even more satisfying after finding LW! It’s all in how you play, though. Lately my group and I have gone more and more freeform, bored with all the inconsistent and arbitrary rules, much like what you are expressing. But don’t let the rules stop you! If you want a more complex game than the system allows for, drop the system, not the game!
This is seriously off-topic though...
Boy howdy would I like to join your group! I’ve got one game that’s rather similar, and I still love playing that one. Luckily, all the others have run out of steam, so I don’t have to pretend to be satisfied with arbitrary judgments for social reasons. But you’re right, this is pretty off-topic.
In a few sentences, can you explain why you would like to know? What would you know if accuracy was low/high/exactly 50%?
Foremost is curiosity. I’ve only really known anything about Bayes Theorem for a month or so, and equipped with this hammer I’ve been looking for nails.
I was also interested in trying to determine just how remarkable this insight power might be and thought that figuring out the posterior probability would give me some insight there. From what I can tell, the number of women who claim intuition so outnumber those who don’t that even if the accuracy in question were 100%, there’d still be a ~58% chance that a woman who has guessed correctly claims special knowledge. So I guess at best I’d know that most people who guess the right answer believe they have special powers.
Strange, it seems my desire to know their accuracy has little to do with my desire to figure out the posterior probability.
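The ~58% figure above can be reproduced with a short Bayes'-rule sketch. This is my own illustration, not from the article: the function name is made up, the counts (108 women, 75 claimers, 60% of claimers correct) are as reported in the comment, and the non-claimers' accuracy is an assumed parameter, since the article didn't report it.

```python
# Hedged sketch of the Bayes update discussed above. Assumed figures:
# 108 women total, 75 claiming intuition, 60% of claimers guessing correctly.
# The accuracy of the 33 non-claimers was not reported, so it is a parameter.

def p_claims_given_correct(n_total=108, n_claim=75,
                           acc_claim=0.60, acc_no_claim=1.0):
    """P(claims intuition | guessed correctly), via Bayes' rule on expected counts."""
    n_no_claim = n_total - n_claim
    correct_claim = n_claim * acc_claim            # expected correct guesses among claimers
    correct_no_claim = n_no_claim * acc_no_claim   # expected correct guesses among non-claimers
    return correct_claim / (correct_claim + correct_no_claim)

# Even if every non-claimer guessed correctly, most correct guessers are claimers:
print(round(p_claims_given_correct(acc_no_claim=1.0), 3))  # 0.577, i.e. the ~58% above
# If non-claimers guessed at chance (50% accuracy):
print(round(p_claims_given_correct(acc_no_claim=0.5), 3))  # 0.732
```

The point the sketch makes concrete: because claimers so outnumber non-claimers in the sample, the posterior is dominated by the base rate, which is why the missing accuracy figure mattered less than expected.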
I sure should’ve read the sequences, if anything, for this post. It is very explanatory in a way.
Yeah, right. And if you study complexity theory, you know that performance on some tasks will at most double when the computing power is squared, and you’ll see that you don’t quite know whether ‘mind’ is such a task and whether that ‘strength’ is logarithmic. Yes, it’s hard to visualize something seriously non-linear like a logarithm or an exponent. That’s because correct thought can be hard for a human brain to process, and invalid thought can be easier.
If you go by the feel of how easy it is to visualize something, that is an example of a process that could force you into an answer whether you liked it or not; but this is just another kind of fake thinking.
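The logarithmic-scaling point above can be made concrete with a toy model. This is my own illustration under an assumed model, not a claim about real minds: suppose performance on some task grows like log2 of the computing power applied.

```python
import math

# Toy model (an assumption for illustration only): performance on a task
# grows like log2 of the computing power thrown at it.
def performance(compute):
    return math.log2(compute)

c = 1024.0
print(performance(c))      # 10.0
print(performance(c * c))  # 20.0: squaring the compute only doubles performance
```

Under this assumption, each doubling of "strength" costs exponentially more compute, which is the sense in which a linearly extrapolated intuition about "stronger minds" can mislead.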
In the way I view Devil’s advocacy it is not at all about coming up with any argument against a proposition, but coming up with a legitimate one against a belief. “What if a time traveler threw a cake into the asteroid belt?” is not an argument anyone would use in a legitimate debate and likewise is one I would avoid if I was attempting to argue against my own beliefs. Arguing merely for the sake of arguing is indeed useless and irrational, but arguing to try to expose your belief’s weak points is rather extremely helpful.
Worse still, it will be someone who didn’t come up with the idea on their own but rather read this OB/LW post and decided to do that.
Still not worried.
This is a lovely example, which sounds quite delicious. It reminds me strongly of the famous example of Russell’s Teapot (from his 1952 essay “Is There a God?”). Are you familiar with his writing?
Yes, I have noticed that many of my favorite people, myself included, do seem to spend a lot of time on self-congratulation that they could be spending on reasoning or other pursuits. I wonder if you know anyone who is immune to this foible :)