Losing Faith In Contrarianism
Crosspost from my blog.
If you spend a lot of time in the blogosphere, you’ll find a great many people expressing contrarian views. If you hang out in the circles that I do, you’ll probably have heard Yudkowsky say that dieting doesn’t really work, Guzey say that sleep is overrated, Hanson argue that medicine doesn’t improve health, various people argue for the lab leak, others argue for hereditarianism, and Caplan argue that mental illness is mostly just aberrant preferences and that education doesn’t work. Often, very smart people—like Robin Hanson—will write long posts defending these views, other people will have criticisms, and it will all be such a tangled mess that you don’t really know what to think about them.
For a while, I took a lot of these contrarian views pretty seriously. If I’d had to bet six months ago, I’d have bet on the lab leak, at maybe 2 to 1 odds. I had significant credence in Hanson’s view that healthcare doesn’t improve health until pretty recently, when Scott released his post explaining why it’s wrong.
Over time, though, I’ve become much less sympathetic to these contrarian views. It’s become increasingly obvious that the things that make them catch on are unrelated to their truth. People like being provocative and tearing down sacred cows—as a result, when a smart, articulate person comes along defending some contrarian view—perhaps one claiming that something we think is valuable is really worthless—the view spreads like wildfire, even if it’s pretty implausible.
Sam Atis has an article titled The Case Against Public Intellectuals. He starts it by noting a surprising fact: lots of his friends think education has no benefits. This isn’t because they’ve done a thorough investigation of the literature—it’s because they’ve read Bryan Caplan’s book arguing for that thesis. Atis notes that there’s a literature review finding that education has significant benefits, yet it’s written by boring academics, so no one has read it. Everyone wants to read the contrarians who criticize education—no one wants to read the boring lit reviews saying that what we believed about education all along is right.
Sam is right, yet I think he understates the problem. There are various topics where arguing for one side is inherently interesting, while arguing for the other side is boring. A lot of people read Austrian economics blogs, yet no one reads (or writes) anti-Austrian economics blogs. That’s because there are a lot of fans of Austrian economics—people who are willing to read blogs on the subject—but almost no one who is really invested in Austrian economics being wrong. As a result, the structural incentives of the blogosphere generally favor being a contrarian.
Thus, unless you peruse the academic literature on a topic in depth, you should expect the sense of the debate you get to be wildly skewed towards contrarian views. And I think this is exactly what we observe.
I’ve seen the contrarians be wrong over and over again—and this is what really made me lose faith in them. Whenever I looked more into a topic, whenever I got to the bottom of the full debate, it always seemed like the contrarian case fell apart.
It’s easy for contrarians to portray their opponents as the kind of milquetoast bureaucrats who aren’t very smart and follow the consensus just because it is the consensus. If Bryan Caplan has a disagreement with a random administrator, I trust that Bryan Caplan’s probably right, because he’s smarter and cares more about ideas.
But what I’ve come to realize is that the mainstream view supported by most academics tends to be supported by some really smart people. Caplan’s view isn’t just opposed by bureaucrats and teachers—it’s opposed by the type of obsessive autist who does a lit review on the effects of education. And while I’ll bet in favor of Caplan against campus administrators, I would never make the mistake of betting against the obsessive high-IQ autists.
Sam Atis—a superforecaster—had a piece arguing against The Case Against Education, but it got eaten by a substack glitch. Reading it, especially after consulting a friend who knows quite a bit about these things, left me pretty confident that Caplan was wrong.
This is very far from the only case; I’ve watched contrarians’ cases fall apart over and over again. Reading Alexey Guzey’s theses on sleep left me undecided—but then Natália’s counter-theses on sleep left me quite confident that Guzey is wrong. Guzey’s case turns out to be shockingly weak and opposed by a mountain of evidence.
Similarly, now that I’ve read through Scott’s response to Hanson on medicine, I’d bet at upwards of 9 to 1 odds that Hanson is wrong about it. There’s an abundance of evidence that medicine has dramatically improved health outcomes, from well-done randomized trials to the fact that survival rates are improving for almost all diseases. Hanson’s studies don’t even really support what he says when examined closely.
Similarly, the lab leak theory—one of the more widely accepted and plausible contrarian views—also doesn’t survive careful scrutiny. It’s easy to think it’s probably right when your perception is that the disagreement is between people like Saar Wilf and government bureaucrats like Fauci. But when you realize that some of the anti-lab leak people are obsessive autists who have studied the topic a truly mind-boggling amount, and don’t have any social or financial stake in the outcome, it’s hard to be confident that they’re wrong.
I read through the lab-leak debates in some depth, reading Scott’s blog, Rootclaim’s response, Scott’s response, and various other pieces. And my conclusion was that the lab-leak view was far, far less plausible than the zoonosis view. The lab leak view has no good explanation of why all the early cases were at the wet market and why the heat map clearly shows the wet market as the place where the pandemic started.
The contrarian’s enemies are not only random conformists. They also include ridiculously smart people who have studied the topic in incredible depth and concluded that the contrarian is wrong. And as we all know from certain creative offshoots of rock, paper, scissors, high-IQ mega autist beats public intellectual.
I read through the Caplan v Alexander debate about mental illness. And I concluded that Caplan wasn’t just wrong, he was clearly and egregiously wrong (I even wrote an article about it). This is not to beat up on Caplan—I generally think he’s one of the better contrarians. But the consensus view often turns out to be right on these things.
Similarly, there are a lot of people like Steve Sailer and Emil Kirkegaard arguing that racial gaps in intelligence are genetic in origin. But when I read them on other topics, they’re just not great thinkers. In contrast, while Jay M’s blog isn’t as popular or as fun to read for most people, he has a good piece arguing pretty convincingly against the genetic explanation of the gap. The author isn’t a conformist—his other articles express various controversial views about race. Yet he did a thorough deep dive into the literature and concluded that the environmental explanation is most plausible. I’ve also chatted with him and he’s very smart and good at thinking, unlike, I think, Kirkegaard and Sailer (I could be wrong about that—I don’t know them that well). I don’t have the statistical acumen to really evaluate the debate, but I do get the same sense here—that while popular contrarians with widely read blogs say one thing, the balance of evidence doesn’t support that view.
Many more people read Kirkegaard and Sailer because expressing the conformist view on the topic is much less interesting than expressing the contrarian view. Most of the people who believe the gap is environmental don’t much want to argue about it, so almost all the people who write about it are people who believe the genetic explanation of the gap. Very few people want to read articles saying “here are 10,000 words showing that the view you reject by calling it racist pseudoscience is actually in conflict with the majority of the evidence.”
I could run through more examples but the point should be clear. Whenever I look more into contrarian theories, my credence in them drops dramatically and the case for them falls apart completely. They spread extremely rapidly as long as they have even a few smart, articulate proponents who are willing to write things in support of them. The obsessive autists who have spent 10,000 hours researching the topic and writing boring articles in support of the mainstream position are left ignored.
It strikes me that there’s a rather strong selection effect going on here. If someone has a contrarian position, and they happen to be both articulate and correct, they will convince others and the position will become less surprising over time.
The view that psychology and sociology research has major systematic issues at a level where you should just ignore most low-powered studies is no longer considered a contrarian view.
I guess in the average case the contrarian’s conclusion is wrong, but it’s also a reminder that the mainstream case is often not communicated clearly, and is often exaggerated or supported by invalid arguments. For example:
it’s not that “dieting doesn’t work”, but that people naively assume that dieting is simple and effective (“if you just stop eating chocolate and start exercising for one hour every day, you will certainly lose weight”, haha nope), even when the actual weight-loss research shows otherwise;
it’s not that “medicine doesn’t improve health”, but that while some parts of medicine are very useful, other parts may be neutral or even harmful, and we often see that throwing more money at medicine does not actually improve outcomes;
it’s not that “education doesn’t work”, but that if you filter your students by intelligence and hard work, of course they will have better outcomes in life regardless of how good your teaching is, so the impact of education is probably vastly overestimated; this also explains why so many pedagogical experiments succeed as a pilot project (when you try them with a small group of smart and motivated students) and then fail in mainstream education (when you try the same thing with average or below-average students);
it’s not that “opening the borders completely is a good idea”, but that a lot of potential value is lost by closing the borders to people who are neither fanatics nor criminals and could easily integrate into the new society.
There is also an opposite bad extreme to contrarianism: the various “I fucking love science… although I do not understand it… but I enjoy attacking people on social networks who seem to disagree with the scientific consensus as I understand it” people. The ones who are sure that the professor or the doctor is always right, and that the latest educational fad is always correct.
This enables sanewashing and motte-and-bailey arguments.
This is a very poor conclusion to draw from the Rootclaim debate. If you have not yet read Gwern’s commentary on the debate, I suggest that you do so. In short, the correct conclusion here is that the debate was a very poor format for evaluating questions like this, and that the “obsessive autists” in question cannot be relied on. (This is especially so because in this case, there absolutely was a financial stake—$100,000 of financial stake, to be precise!)
I’m broadly sympathetic to this post. I think a lot of people adjacent to the LessWrong cluster tend to believe contrarian claims on the basis of flimsy evidence. That said, I am fairly confident that Scott Alexander misrepresented Robin Hanson’s position on medicine in that post, as I pointed out in my comment here. So, I’d urge you not to update too far on this particular question, at least until Hanson has responded to the post. (However, I do think Robin Hanson has stated his views on this topic in a confusing way that reliably leads to misinterpretation.)
Rebuttal here!
Anyway, if the message someone received from Hanson’s writings on medicine was “yay Hanson”, and Scott’s response was “boo Hanson,” then I agree people should wait for Hanson’s rebuttal before being like “boo Hanson.”
But if the message that people received was “medicine doesn’t work” (and it appears that many people did), then Scott’s writings should be a useful update, independent of whether Hanson’s-writings-as-intended was actually trying to deliver that message.
The statement I was replying to was: “I’d bet at upwards of 9 to 1 odds that Hanson is wrong about it.”
If one is incorrect about what Hanson believes about medicine, then that fact is relevant to whether you should make such a bet (or more generally whether you should have such a strong belief about him being “wrong”). This is independent of whatever message people received from reading Hanson.
Yeah that’s fair! I agree that they would lose the bet as stated.
What does it mean to claim that these people are contrarians?
Is there a consensus position at all? For any existing policy, you could claim that there is some kind of centrist compromise holding that it’s a good policy, so people who propose changing policy, like Hanson and Caplan, are defying that compromise. But there is not really any explicit consensus goal for most policies, so claiming that existing institutions are a bad compromise because they pursue multiple goals, and proposing to separate those goals, is not in defiance of any consensus. Caplan, Hanson, and Sailer are offensive because they feel we should try to understand the world and try to steer it. They may be wrong, but the people opposed to them rarely offer an opposing position; rather, they are opposed to any position. It seems to me that the difference between true and false is much smaller than the gap between argument and pseudoscience. Maybe Sailer is wrong, but the consensus position that he is peddling pseudoscience is much more wrong and much more dangerous.
Sailer rarely argues for genetic causes, but leaves that to the psychologists. He believes it and sometimes he uses the hypothesis, but usually he uses the hypotheses 1-4 that Turkheimer, Harden, and Nisbett concede. Spelling out the consequences of those claims is enough to unperson him. Maybe he’s wrong about these, but he’s certainly not claiming to be a contrarian. And people who act like these are false rarely acknowledge an academic consensus. Or compare Jay: it’s very hard to distinguish genetic effects from systemic effects, so when Jay argues that racial IQ gaps aren’t genetic, he is (explicitly!) arguing that they are caused by racial differences in parenting. Sailer often claims this (he thinks it’s half the effect), but people hate this just as much as anything else he says. Calling him a contrarian and focusing attention on one claim seem like an attempt to mislead.
That is a very clear example, but I think something similar is going on in the rest. Guzey seems to have gone overboard in reaction to Matthew Walker’s book Why We Sleep. Did that book represent a consensus? I don’t know, but it was concrete enough to be wrong, which seems to me much better than an illusion of a consensus.
Thanks for the link. While it didn’t convince me completely, it makes a good point that as long as there are some environmental factors for IQ (such as malnutrition), we should not make strong claims about genetic differences between groups unless we have controlled for these factors.
(I suppose the conclusion that the genetic differences between races are real, but also entirely caused by factors such as nutrition, would succeed in making both sides angry. And yet, as far as I know, it might be true. Uhm… what is the typical Ashkenazi diet?)
It’s delicious, is what it is.
If it’s this piece, I would be interested to know why you found it convincing. He doesn’t address (or seem to have even read) any of Bryan’s arguments. His argument basically boils down to “but so many people who work for universities think it’s good”.
The next part of the sentence you quote says, “but it got eaten by a substack glitch”. I’m guessing he’s referring to a different piece from Sam Atis that is apparently no longer available?
It’s not that piece. It’s another one that got eaten by a Substack glitch, unfortunately—hopefully it will be back up soon!
What makes you believe that Substack is to blame and not him unpublishing it?
Do you happen to have a copy of it that you can share?
So the linked article says that Steve Sailer and Emil Kirkegaard are right when they say that there are racial gaps in intelligence based on genetics. Basically, he says there’s a gap but wants to debate its size.
He thinks it’s very near zero if there is a gap.
He explicitly says that the people who argue that there’s no gap are mistaken to argue that. He argues for the gap being small, not nonexistent. He does not use the term “near zero” himself.
I feel like if there’s one side arguing the genetic gap is x, and one side arguing the genetic gap is 0, the natural dichotomization is whether the genetic gap is larger or smaller than x/2.
Instead of thinking about how you can divide a discussion into two sides, you can also focus on “what’s actually true”. In that case, it would make sense to end with an estimate of the size of the real gap.
If we instead look at “what people argue”, https://www1.udel.edu/educ/gottfredson/30years/Rushton-Jensen30years.pdf assumes the two categories of culture-only (0% genetic–100% environmental) and hereditarian (50% genetic–50% environmental).
Jay M defines the environmental model as <33% genetic and the genetic model as >66% genetic. What Rushton called the hereditarian position is right in the middle between Jay’s environmental and genetic model.
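Laying the percentages quoted in this thread on a single number line (a minimal sketch; the only numbers used are the ones stated by Rushton and Jay M, plus the x/2 midpoint proposed above):

$$0\%\ (\text{culture-only}) \;<\; 25\%\ (x/2\ \text{for}\ x=50\%) \;<\; 33\%\ (\text{Jay M's environmental cutoff}) \;<\; 50\%\ (\text{Rushton's hereditarian}) \;<\; 66\%\ (\text{Jay M's genetic cutoff})$$

Since $(33\% + 66\%)/2 \approx 50\%$, the hereditarian position does sit almost exactly midway between Jay M’s two cutoffs, which is the point being made here.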
Definitely relevant for figuring out what’s true when one is only talking about the object level, but the OP was about how trustworthy contrarians are compared to the mainstream, rather than simply being about the object level.
Generally, hedgehogs are less trustworthy than foxes. If you see a debate as being about either believing a mainstream hedgehog position or a contrarian hedgehog position, you are often not getting the most accurate view.
Instead of thinking that either Matthew Walker or Guzey is right, maybe the truth lies somewhere in the middle and Guzey is pointing to real issues but exaggerating the effect.
I think most of the cases that the OP lists are of that nature: there’s a real effect, and the hedgehog contrarian position exaggerates it.
I doubt you could have picked a worse example to make your point that contrarian takes are usually wrong than racial differences in IQ/intelligence.
Hmm, this sounds like an awfully contrarian take to me.
I think binary examples are deceptive, in the “reversed stupidity is not intelligence” sense. Thinking through things from first principles is most important in areas that are new or rapidly changing, where there are fewer reference classes and experts to talk to. It’s also helpful in areas where the consensus view is optimized for someone very unlike you.
There are two topics mixed here:
1. The existence of contrarians.
2. The side effects of their existence.
My own opinion on 1 is that contrarians are necessary in moderation. They are doing the “exploration” part of the “exploration-exploitation dilemma”. By the very fact of their existence, they allow society in general to check alternatives and find more optimal solutions to problems than the already-known “best practices”. It’s important to remember that almost everything we know now started with some contrarian—once it was a well-established truth that monarchy is the best way to rule the people and that democrats were dangerous radicals.
On 2: it is indeed a problem that contrarian opinions are more interesting on average, but the solution lies not in somehow making them less attractive, but in making conformist materials more interesting and attractive. That’s why it is paramount to have highly professional science educators and communicators, not just academics. My own favorites are the vlogbrothers (John and Hank Green) in particular and their team at Complexly in general.
You might also be interested in Scott’s 2010 post warning of the ‘next-level trap’ so to speak: Intellectual Hipsters and Meta-Contrarianism
I tend to read most of the high-profile contrarians with a charitable (or perhaps condescending) presumption that they’re exaggerating for effect. They may say something in a forceful tone and imply that it’s completely obvious and irrefutable, but that’s rhetoric rather than truth.
In fact, if they’re saying “the mainstream and common belief should move some amount toward this idea”, I tend to agree with a lot of it (not all—there’s a large streak of “contrarian success on some topics causes very strong pressure toward more contrarianism” involved).
Thank you for writing this! It expresses in a clear way a pattern that I’ve seen in myself: I eagerly jump into contrarian ideas because it feels “good” and then slowly get out of them as I start to realize they are not true.
I agree that contrarians ’round these parts are wrong more often than the academic consensus, but the success of their predictions about AI, crypto, and COVID proves to me it’s still worth listening to them, trying to be able to think like them, and probably taking their investment advice. That is, when they’re right, they’re right big-time.
Contrarianism is not what led people to be right about those things.
Nice post. Gets at something real.
My feeling is that a lot of contrarians get “pulled into” a more contrarian view. I have noticed myself in discussions proposing a specific, technical point correcting a detail of a particular model. Then, when I talk to people about it, I feel like they are trying to pull me towards the simpler position (all those idiots are wrong, it’s completely different from that). This happens through things like “ah, so you mean...”, which is very direct. But it also happens through a much more subtle process, where I talk to many people, and most of them go away thinking “OK, specific technical correction on a topic I don’t care about that much” and never talk or think about it again. But the people who come away with the exaggerated idea are more likely to remember it.
I’m convinced by the mainstream view on COVID origins and medicine.
I’m ambivalent on education—I guess that if done well, it would consistently have good effects, and that currently it on average has good effects, but also that the effect varies a lot from person to person, so simplistic quantitative reviews don’t tell you much. When I did an epistemic spot check on Caplan’s book, it failed terribly (it cited a supposedly ingenious experiment showing that university didn’t improve critical thinking, but IMO the experiment had terrible psychometrics).
I don’t know enough about sleep research to disagree with Guzey on the basis of anything but priors. In general, I wouldn’t update much on someone writing a big review, because often reviews include a lot of crap information.
I might have to read Jay M’s rebuttal of black-white genetic IQ differences in more detail, but at first glance I’m not really convinced by it, because it seems to focus on small sample sizes in unusual groups, so it’s unclear how much study noise, publication bias, and sampling bias affect things. At this point I think indirect studies are becoming obsolete, and it’s becoming more and more feasible to just directly measure the racial genetic differences in IQ.
However I also think HBDers have a fractal of bad takes surrounding this, because they deny the phenotypic null hypothesis and center non-existent abstract personality traits like “impulsivity” or “conformity” in their models.
I couldn’t swallow Eliezer’s argument, I tried to read Guzey but couldn’t stay awake, Hanson’s argument made me feel ill, and I’m not qualified to judge Caplan.
You contrast the contrarian with the “obsessive autist”, but what if the contrarian also happens to be an obsessive autist?
I agree that obsessively diving into the details is a good way to find the truth. But that comes from diving into the details, not anything related to mainstream consensus vs contrarianism. It feels like you’re trying to claim that mainstream consensus is built on the back of obsessive autism, yet you didn’t quite get there?
Is it actually true that mainstream consensus is built on the back of obsessive autism? I think the best argument for that being true would be something like:
1. Prestige academia is full of obsessive autists. Thus the consensus in prestige academia comes from diving into the details.
2. Prestige academia writes press releases that are picked up by news media and become mainstream consensus. Science journalism is actually good.
BTW, the reliability of mainstream consensus is to some degree a self-defeating prophecy. The more trustworthy people believe the consensus to be, the less likely they are to think critically about it, and the less reliable it becomes.
It seems like you’re setting up three different categories of thinkers: academics, public intellectuals, and “obsessive autists”.
Notice that the examples you give overlap across those categories: Hanson and Caplan are academics (professors!), while Natália Mendonça is not an academic but is approaching being a public intellectual by now(?). Similarly, Scott Alexander strikes me as being in the “public intellectual” bucket much more than any other bucket.
So your conclusion, as far as I read the article, should be “read obsessive autists” instead of “read obsessive autists that support the mainstream view”. This is my current best guess—“obsessive autists” are usually not under much pressure to say politically palatable things, very unlike professors.
I think I’ve noticed some sort of cognitive bias in myself and others where we are naturally biased towards “contrarian” or “secret” views because it feels good to know something that others don’t know / be right about something that so many people are wrong about.
Does this bias have a name? Is this documented anywhere? Should I do research on this?
GPT4 says it’s the illusion of asymmetric insight, which I’m not sure is the same thing (I think it is the more general term, whereas I’m looking for one specific to contrarian views). (Edit: it’s totally not what I was looking for.) Interestingly, it only has one hit on LessWrong. I think more people should know about this (the specific one about contrarianism) since it seems fairly common. Edit: The illusion of asymmetric insight is totally the wrong name. It seems closer to the illusion of exclusivity, although that does not feel right (that is a method for selling products, not the name of a cognitive bias that makes people believe in contrarian stuff because they want to be special).
Please don’t write comments all in boldface. It feels like you’re trying to get people to pay more attention to your comment than to others, and it actually makes your comment a little harder to read as well as making the whole thread uglier.
Noted, thanks.
It may be useful to write about how a consumer can distinguish contrarian takes from original insights. Until that’s a common skill, there will remain a market for contrarians.
It all depends on the topic. It’s unlikely that the consensus in objective fields like mathematics or physics is wrong. The more subjective, controversial, and political something is, and the more profit and power lies in controlling the consensus, the more skepticism is appropriate.
The bias on Wikipedia (to use one example) is correlated in this manner: CW topics contain a lot of misinformation, while topics that people aren’t likely to feel strongly about are written about more honestly.
If some redpills or blackpills turned out to be true, or some harsh-sounding aspects of reality related to discrimination, selection, biases or differences in humans turned out to be true, or some harsh philosophy like “suffering is good for you”, “poverty is negatively correlated with virtuous acts” or “People unconsciously want to be ruled” turned out to be true, would you hear about it from somebody with a good reputation?
I also think it’s worth noting that both the original view and the contrarian view might be overstated: education may be neither useless nor as good as we make it out to be. I’ve personally found myself annoyed at exaggerations like “X is totally safe, it never has any side effects” or “people basically never do Y, it is less likely than being hit by lightning” (despite millions of people participating because it’s relevant to their future, thousands of whom are mentally ill by statistical necessity). This has made me want to push back, but the opposing evidence is likely exaggerated or cherry-picked as well, since people feel strongly about various conflicts.
The optimization target is Truth only to the extent that Truth is rewarded. If something else has a higher priority, then the truth will be distorted. But due to the broken-windows theory, it might be better to trust society too much rather than too little. I don’t want to spread doubt, it might be harmful even in the case that I’m right.
The roundness of the earth is not a point upon which any political philosophy hinges, yet flat earthism is a thing. The roundness is not subjective, it isn’t controversial, and it does not advance anyone’s economic interest. So why do people engage in this sort of contrarianism? I speculate that the act of being a contrarian signals to others that you question authority. The bigger the consensus challenged, the more disdain for authority shown. One’s willingness to question authority is often used as a proxy for “independent thinking.” The thought is that someone who questions authority might be more likely to accept new evidence. But questioning authority is not the same as being an independent thinker, and so, when taken to its extreme, it leads to denying reality, because isn’t reality the ultimate authority?
That’s a great example of something which doesn’t follow the dynamics that I mentioned! I think that your example relates to the dynamics of cults and religions. They do blend into politics a little bit as they’re fed by a distrust in the system and in authorities in general. But I agree that the earth being flat would be a strange thing to lie about, unlike microchips, electromagnetic harassment, UFOs, lizard-people, and the cure of cancer.
There are other related ideas, like “secret knowledge”, but at this level we’re practically talking about symptoms of paranoid schizophrenia. Yet flat earthers seem more common than the rate of schizophrenia would suggest, so I’m not sure how to explain this gap.
Maybe these “independent thinkers” just hate authority, by which I mean that they’re not the non-conformists they appear to be. But being entirely alone in one’s beliefs is quite painful, so if the only group which shares one’s hatred of authority believes that the earth is flat, maybe the desire to fit in is strong enough that one deceives oneself. And people who believe in one conspiracy seem likely to believe in multiple theories, which is very likely an important piece of information if you want to understand these people.
Another guess is that nihilism is too painful. You know that “I want to believe” poster? I think we should take the word “want” literally. If you can’t believe in God, but find the idea of an inert, material universe too painful to bear, you will look for signs of magic or anything interesting. Luck, karma, aura, chakra, fate: anything to spice up your life, anything to add extra meaning and possibilities to it. A large-scale conspiracy could fill this need. You’d also go from being a crazy loser to being a warrior fighting against the corrupt, deceptive system. In other words, such a conspiracy being true would elevate the importance of the individual.