It occurs to me that a large part of my rejecting theism (Christianity) had nothing to do with the claims of religion itself, but rather was based on a study of human psychology and cognition. That is, while my study of the historical evidence for Christianity did help assign low probabilities to traditional biblical accounts like Noah’s Ark, the Exodus from Egypt, Jesus’ resurrection, etc., the nail in the coffin seems to be my observation of the tendency humans possess to believe in some sort of religion, regardless of the particular details.
I’ve noticed this in the area of conspiracy theorists as well. The biggest reason I tend not to accept traditional conspiracy theories (9/11 was planned by insiders, multiple shooters in Dealey Plaza, etc.) is unrelated to the details of the particular theory. My biggest reason for rejecting conspiracies is my observation that humans are prone to believing them.
I’m wondering how Bayesian probability treats this sort of ‘evidence’—evidence that is unrelated to the objective details of the question at hand.

Anyone wanna explain?
I think it’s related to this: http://en.wikipedia.org/wiki/Reference_class_problem. Once you can categorize a theory into a certain reference class, you use your priors for that reference class rather than your priors for similarly improbable claims of a more generic nature.

Huh, this is the only reasonable reply in the whole thread and it is largely ignored.
First of all, in the absence of any real evidence, conspiracy theories and religion can mostly be rejected simply on the basis of priors. However, as a general rule, many people believing something, especially if it’s a vast majority, is decent evidence for a claim, and so the fact that many people believe something should cause you to update your subjective probability upward, to some extent.
The important point here is that the strength of that evidence depends very significantly on how people arrived at those beliefs. If you were to find that those beliefs were formed via some generally reliable method, e.g. scientific experimentation and dissemination via broad scientific consensus, this would cause you to further update your subjective probability upward, because it would be a strong sign that their beliefs correlate with reality.
On the other hand, if you were to find that their beliefs were obtained by, for example, using a Ouija board, you would update your subjective probability back down, because the evidential value of those beliefs would fall to pretty much zero.
Basically, finding out the process by which people obtained their beliefs partly or wholly screens off the evidential weight of the fact that they have those beliefs.
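To make the screening-off point concrete, here is a minimal numerical sketch in odds form; every likelihood ratio in it is invented purely for illustration.

```python
# Toy model: H = "the claim is true", E = "many people believe the claim".
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Every number below is invented purely for illustration.

def update(prior_odds: float, likelihood_ratio: float) -> float:
    return prior_odds * likelihood_ratio

prior_odds = 0.01 / 0.99  # a claim we initially consider quite unlikely

# Process unknown: widespread belief is somewhat likelier if the claim is true.
print(update(prior_odds, 3.0))   # modest update upward

# Process = broad scientific consensus: belief is far likelier given truth.
print(update(prior_odds, 50.0))  # strong update upward

# Process = Ouija board: the board's output is (nearly) independent of the
# truth, so the likelihood ratio collapses toward 1 and the headcount stops
# carrying weight.
print(update(prior_odds, 1.0))   # posterior odds equal the prior odds
```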
If many people got the same belief from Ouija boards independently, I’d update my belief in Ouija boards.
Conversely, if millions of people believe the result of a single poorly designed study, that does not make their number very relevant.
I think what should undermine belief in religions or conspiracy theories is that these people all read the same books and watch the same YouTube videos.
If many people got the same belief from Ouija boards independently, I’d update my belief in Ouija boards.
I envisaged them all in the same room using the same Ouija board at once, but that image isn’t really the most obvious interpretation of what I said.
Nevertheless, you’re not right about the Ouija board case.
First of all, it also depends on how many different beliefs could result from the Ouija boards, and how many people ended up with different beliefs. For example, if 40% of the world’s population independently used a Ouija board to conclude the truth of a certain religion, and another 40% of the world’s population independently used a Ouija board to conclude the falsity of that religion, this would not be significant evidence despite the large numbers involved.
On the other hand, if 80% of people tried it and 79% of people ended up with the same belief, then you definitely need to take a significant look at Ouija boards and how people are using them and try to figure out what’s going on there. However, I wouldn’t really take it as evidence in favour of Ouija boards, because while systematically inducing certain beliefs is a strong sign of precision, it is not any kind of sign of accuracy.
Unless you have other reasons to support the accuracy of Ouija boards, such as a well-understood causal mechanism, or their being consistently correct about various known facts (that are unknown to the test subjects) in double-blind experiments, you cannot use them as evidential support for either belief.
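A small simulation can illustrate the precision-versus-accuracy distinction being drawn here; the “shared expectation” mechanism below is an assumption made up for the sketch, standing in for whatever actually drives the boards’ answers.

```python
import random

# Assumption (invented for this sketch): each board session's answer is driven
# by a shared cultural expectation plus noise, not by the hidden truth.
random.seed(0)
truth = random.choice([True, False])  # the hidden fact being asked about
shared_expectation = True             # what users collectively expect to hear

answers = [shared_expectation if random.random() < 0.95
           else not shared_expectation
           for _ in range(10_000)]

agreement = answers.count(shared_expectation) / len(answers)
accuracy = answers.count(truth) / len(answers)
print(f"agreement: {agreement:.1%}")  # ~95%: highly consistent (precision)
print(f"accuracy:  {accuracy:.1%}")   # ~95% or ~5%, depending on whether the
# expectation happened to match the truth; averaged over the two possibilities
# it is 50%, so the high agreement by itself says nothing about accuracy.
```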
Conversely, if millions of people believe the result of a single poorly designed study, that does not make their number very relevant.
Your point about independence between people is an important one, but I think you still have to pay significant attention to the specific nature of the method by which the belief was attained.
If, as in your example, millions of people believe due to the exact same study, your evidence now consists of a single scientific study which managed to become very widely known and believed, as compared to the many studies which did not. Both of these factors are correlated with truth, but the a priori weight of millions of people has been screened off; relative to that prior weight you should probably update your beliefs only slightly upward after finding out that the source of the people’s belief was a single study.
On the other hand, if you find out that each person formed their belief on the basis of an independently performed experiment then you would view it as significant evidence, although it would behoove you to study the details of the experiment they performed in order to work out whether there is a systematic error in the experiment or in their interpretation of that experiment.
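A rough sketch of this contrast, again with invented numbers: independent pieces of evidence multiply their likelihood ratios, while a shared source contributes its evidence only once, no matter how many people repeat it.

```python
# Invented numbers, purely for illustration.
prior_odds = 0.10 / 0.90  # the claim starts out unlikely
lr_one_study = 4.0        # one study of this quality is modest evidence

# A million believers who all cite the same single study: the study itself is
# the evidence, and the headcount adds almost nothing on top of it.
odds_shared = prior_odds * lr_one_study
print(f"shared source:  posterior odds {odds_shared:.2f}")  # ~0.44, still unlikely

# Ten *independent* experiments of the same modest quality: their likelihood
# ratios multiply, and independence, not headcount, does the real work.
odds_independent = prior_odds * lr_one_study ** 10
print(f"10 independent: posterior odds {odds_independent:,.0f}")  # ~116,508
```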
I think what should undermine belief in religions or conspiracy theories is that these people all read the same books and watch the same YouTube videos.
An easy counter-argument in the case of religion is that those people all read those same books because they’re religious, rather than all being religious because they read those same books. If the religion causes them to read those books then it’s irrelevant that they read the same books.
In the case of religion, I think socialization and indoctrination are probably the more significant underlying causes. However, those are clearly not generally reliable methods for truth, especially when you consider that the root sources of those religions are also not based on particularly reliable paths to knowledge.
In a Bayesian framework, you are simply assigning a higher prior probability to “x believes that A because of cognitive bias B” than to “x believes that A because A is true and, through causal mechanism C, A’s truth is determining x’s belief in A”, or some similar set of hypotheses. As long as these priors approximately reflect base rates (in addition to object-level arguments you have examined for yourself), it seems like a decent way to go about things to me.
People are more likely to believe true things, so someone believing something is evidence that it’s true. If you find out that they’re especially likely to believe this even if it’s not true, but not proportionately more likely to believe it if it is, then the fact that they believe it is not as strong evidence. Thus, if it’s a given that they believe it, finding out that they’d believe it either way is evidence against it.
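In odds form, and using the Pr(...) notation that appears elsewhere in the thread, the point being made here is the standard one:

Pr(A | believes A) / Pr(not-A | believes A) = [Pr(believes A | A) / Pr(believes A | not-A)] × [Pr(A) / Pr(not-A)]

“They’d believe it either way” says the likelihood ratio Pr(believes A | A) / Pr(believes A | not-A) is close to 1, so the belief barely moves the odds; and if you had already updated upward on the belief using a larger ratio, learning this forces you to give that update back.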
I’d throw in a modifier that people are most likely to believe true things about areas where they have direct experience and get feedback. It’s something like near and far, and the near has to be very near. Give extra points if the experience is recent.
The less a theory meets those constraints, the less you should think belief is evidence that it’s true.
How do you know this?

It’s an implicit assumption that you have to make before you can get anywhere, like modus ponens. From there, you can refine your beliefs more.

Modus ponens can be demonstrated to be a valid assumption by drawing up a truth table. How do you demonstrate that “people are more likely to believe true things”?
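For concreteness, a quick mechanical check of that truth table (a sketch; “implies” here is the usual material conditional):

```python
# Modus ponens is valid iff ((p -> q) and p) -> q is true under every
# assignment of truth values to p and q.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b  # the material conditional

print("p      q      ((p -> q) and p) -> q")
for p in (True, False):
    for q in (True, False):
        print(f"{p!s:<6} {q!s:<6} {implies(implies(p, q) and p, q)}")
# The final column is True in all four rows, i.e. the schema is a tautology.
```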
Using truth tables seems more complicated than modus ponens. I would expect it to be better to use modus ponens to justify truth tables than the other way around. Regardless, you need to start with something. You can’t justify modus ponens to a rock.
If you don’t think people are more likely to believe true things, then how do you justify any of that stuff you just said being true?
People tend not to believe things because they’re true, but for some other reason.
Pr(People Believe | True) < Pr(People Believe | Some other explanation)? I would hazard a guess that the number of untrue things people have believed throughout human history overshadows the number of things they (we) have believed that were actually true.
It’s a bit of an ad hominem, but logical fallacies can be viewed as weak Bayesian evidence.
If you reject theories based on humans believing in them you can’t believe in any theories.

Hm. I feel like that’s oversimplifying it.

I don’t reject theism, for instance, because people believe it. Rather, I’ve noticed the lynchpin of my disbelief seems to have as much to do with what I’ve learned about why people tend to believe in things like religion as it does with an evaluation of the actual claims of any given religion.

What do you consider the reason people believe in conspiracy theories?
All but the most ignorant believe in conspiracy theories. For example, I believe that Julius Caesar was assassinated by a conspiracy in the Roman Senate, that a conspiracy in the Thai Army led to the coup in May of this year, that most of the narcotics consumed in the United States are sold by illegal, semi-secret networks of supply and distribution, and so on. But none of these are called “conspiracy theories.” Rather, a “conspiracy theory” is a conspiracy theory that the speaker wishes to ridicule. For example, 9-11 “Truthers” are mocked for believing in a conspiracy theory, but the correct version of the story is also a conspiracy theory—but a conspiracy by an al-Qaeda cell, not the US government.
Rather, a “conspiracy theory” is a conspiracy theory that the speaker wishes to ridicule.
I don’t think this is accurate.
What characterizes the things generally called “conspiracy theories” is not only that the people talking about them want to ridicule them. They also tend to have the following features not widely shared by more credible theories with conspiracies in:
They have very little evidence directly supporting them. (Advocates tend to focus on alleged evidence against rival “mainstream” theories.)
They involve large conspiracies, with many people with varied interests, successfully keeping a big secret unexposed, even though it would take only a small slip-up or leak for it to get out.
They require those many people to be consistently villainous in ways there’s little reason (outside the conspiracy theory) to think they are.
So, for instance, “9-11 truthers” can’t (e.g.) point to leaked government memos saying “let’s fly planes into buildings and say it was terrorists”; rather, as I understand it, they argue that (1) the “usual” explanations are no good because being hit by a plane can’t actually cause a building to collapse as the WTC towers did, and (2) that means the government is covering something up, so they probably planned it all. This theory requires that a whole lot of people in the US government knowingly betrayed their country and killed thousands of innocent people, and did it without getting caught, and no one involved blew the whistle.
Obviously, any false theory isn’t going to be adequately supported by the facts. But I dispute that they necessarily have any pattern of features, and I suggest any apparent pattern is largely read in by people trying to denigrate them.
For example, why would a 9-11 conspiracy require a massive number of government operatives to know? Obviously, it could have been carried out by a small terrorist cell taking over commercial airliners—a conspiracy only requires a single government agent telling them to do it. Now, in the absence of specific information, Ockham’s razor should tell us that the government agent is superfluous, but I suppose it depends on your priors. Governments have been known to do things like this. The US government does have secretive programmes. Suppose a large bomb went off tomorrow in central Moscow, and the government blamed “Galician fascist terrorists.” Due to my priors, I would give a high probability to it being an inside job, so if your priors for the USG are sufficiently faulty as to equate it with Russia, you might be so foolish as to become a 9-11 truther.
I also think you are unfair because when “conspiracy theories” get good evidence, they stop being called such. For example, it was a conspiracy theory to claim that the British Communist Party was secretly in the pay of Moscow, right up to the minute the Kremlin archives were opened. Then it just became historical fact. So there’s a selection bias at work.
Conspiracy theories become pathological when absence of evidence is taken as evidence of a cover up. But legitimate belief in conspiracy theories normally comes down to priors. It is equally pathological to say you don’t believe in conspiracy theories, but then claim to be unsurprised by e.g. the Snowden revelations. If the NSA secretly undermining public cryptography without anyone finding out was part of your model all along, then what on earth do you mean you don’t believe in conspiracy theories? I find it mostly adds up to nothing more than a pose—I am such a man of the world that I never have to update my model.
a conspiracy only requires a single government agent telling them to do it
Maybe I’m misusing the terminology somehow, but I wouldn’t regard a theory that says the September 11 attack was carried out by a generic terrorist group asked to do it by a single rogue government official acting alone as a “conspiracy theory”, and I don’t think that’s close to what “9-11 truthers” mostly think. (Also, I’m not sure how it would work. Most terrorist organizations don’t take instructions from random rogue government officials.)
Isn’t the usual “truther” story that the US government—meaning something like “the President, some of his senior staff, and enough people further down to make it actually happen”—were responsible, with the goal of justifying an invasion of Iraq or stirring up support for the administration on the back of fear and anger, or something like that?
(Maybe I’m misunderstanding what you mean by “a single government agent”?)
if your priors for the USG are sufficiently faulty as to equate it with Russia, you might be so foolish as to become a 9-11 truther.
Yes, you might. Were you expecting me to disagree? My claim isn’t that (what are commonly called) conspiracy theories are all so insane that no one could embrace them unless seriously mentally disordered. It’s that they have enough features in common, other than being disapproved of by the person mentioning them, that “conspiracy theory” isn’t a mere term of abuse.
(For my part, though my opinion of the Russian government is extremely negative, I would not at all expect it to start massacring random Russian citizens in order to manufacture outrage against “Galician fascist terrorists”, not least because they’d be likely to get caught and I’d expect them not to want that.)
when “conspiracy theories” get good evidence, they stop being called such.
I agree that there’s (so to speak) an evaluative element in the term “conspiracy theory”. But I don’t think it’s what you say it is (i.e., that the only difference is whether the person using the term wants to ridicule the theory in question). It’s more like the evaluative element in the term “murder”. You don’t call a killing a murder if you think it was justified, but that doesn’t mean that “murder” just means “killing the speaker disapproves of”. Most opponents of the death penalty don’t call executions murders. Most pacifists don’t call deaths in war murders. (Some might, in both cases.)
Conspiracy theories become pathological when absence of evidence is taken as evidence of a cover up.
And it seems to me that this is precisely part of what distinguishes “conspiracy theories” from other theories involving conspiracies.
If the NSA secretly undermining public cryptography [...] was part of your model [...] what on earth do you mean you don’t believe in conspiracy theories?
Some theories about NSA attacks on crypto would have been rightly classed as conspiracy theories, although unusually plausible ones because, e.g., doing things of that general sort and keeping them secret is the NSA’s job. Some of those now turn out to be true. So something formerly classed as a conspiracy theory is true, and conventionally is no longer called a conspiracy theory. I have no problem with any of this, and I don’t see why anyone else should have either.
I have the impression that you have a not-quite-correct impression of my opinions, so let me make some things more explicit. I think that for something to be called a “conspiracy theory” it is neither necessary nor sufficient for it to involve a conspiracy and be thought ridiculous by the person so calling it. Rather, it needs to be lacking in evidence, explain this in terms of an implausible large-scale conspiracy to keep it secret, require the people involved to be more evil than there’s other reason to think they are, and be thought untrue by the person referring to it. When a conspiracy theory turns out to be true after all, it is simply a conspiracy and belief in it is no longer called a “theory” (unsurprisingly as the word “theory” in common use is restricted to things that don’t have a convincing preponderance of evidence in their favour; this differs from scientific usage). And I think some things classified as conspiracy theories turn out to be true, but relatively few because to be a conspiracy theory something needs to involve unlikely elements and be widely thought untrue.
So, for instance, if someone believed a few years ago that the NSA was deliberately attempting to insert backdoors into widely available cryptographic software, that would have been something of a borderline case. There wasn’t a lot of evidence; for the theory to be true the activity would indeed have had to be kept secret by a lot of people, but it was actually pretty plausible that they’d do so; it would maybe require a slightly higher level of evil than a naive observer might expect from an agency like the NSA, but not much; and it was thought untrue by a lot of people. Now we have better evidence that they did it, which has raised general expectations of their level of evil, and fewer people think it’s untrue, so this has made the shift from “maybe just about a conspiracy theory” to “not really a conspiracy theory, just a plausible and probably correct theory about a conspiracy”. (The Snowden revelations have maybe pushed it in the other direction a little, by reducing our confidence in the NSA’s ability to keep such things covered up. But I think the direction of the overall effect is clear.)
Thank you for an interesting reply.

I think the way you use “conspiracy theory” is quite a good one*, but somewhat non-standard. In particular, you state that ideas described as “conspiracy theories” are sometimes correct. I think Brillyant (to whom I was originally replying) gives a much more standard description when he calls them “ludicrous” and “absurd.” For example, Wikipedia states that the phrase:
has acquired a derogatory meaning, implying a paranoid tendency to see the influence of some malign covert agency in events. The term is often used to dismiss claims that the critic deems ridiculous, misconceived, paranoid, unfounded, outlandish, or irrational.
If the ordinary connotation of “conspiracy theory” was “low-probability hypothesis involving a conspiracy” I would not have objected to its use.
*although I think that the “evil” part needs work.
It looks to me as if Brillyant is using the term to mean something close to “ludicrous, absurd theory involving a conspiracy”. I remark firstly that this isn’t so far from “low-probability hypothesis involving a conspiracy” and secondly that it’s entirely possible that Brillyant hasn’t sat down and thought through exactly what shades of meaning s/he attaches to the term “conspiracy theory”, and that on further reflection s/he would define that term in a way that clearly doesn’t amount to “theory I want to make fun of”.
I appreciate that you’re concerned about equivocation where someone effectively argues “this is a conspiracy theory (= theory with a conspiracy in), therefore it should be treated like a conspiracy theory (= ludicrous absurd theory with a conspiracy in)”, but I don’t see anyone doing that in this thread and given how firmly established the term is I don’t think there’s much mileage in trying to prevent it by declaring that “conspiracy theory” simply means “theory with a conspiracy in”.
(In particular I don’t think Brillyant is engaging in any such equivocation. Rather, I think s/he is, or would be after more explicit reflection, saying something like this: 1. People like to believe in conspiracies. 2. Therefore, the fact that a theory is believed by quite a lot of people is less evidence when the theory features a conspiracy than it normally would be. 3. So when someone offers up a theory that isn’t terribly plausible on its face and that involves a conspiracy, my initial estimate is that it’s unlikely to be true; the best explanation of the fact that I’m being invited to consider it is that its advocates have fallen prey to their inbuilt liking for theories involving conspiracies. -- This doesn’t oblige Brillyant to disbelieve every theory with a conspiracy in, because some actually have good evidence or are highly plausible for other reasons. Those tend not to be the ones labelled “conspiracy theory”.)
I think you misunderstand my concern; perhaps I have not been clear enough. I am not so much worried about equivocation, as I am worried by precisely the 3-step process which you describe. And I am particularly worried about people going through that process, labelling something a “conspiracy theory,” then the theory turns out to be true, and they never reassess their premises.
Let’s restate your process in more neutral language.
1. For reasons of specialisation, partial information, etc., I treat the fact that lots of people believe in a theory as partial evidence in its favour.
2. Some people have a higher prior than me for the existence of conspiracies.
3. Therefore if a theory involving a conspiracy is believed by quite a lot of people, it may be that this belief is due to their higher prior for conspiracies, not any special knowledge or expertise that I need to defer to.
4. Therefore I treat their belief in the theory as less evidence than normal, on the basis that if I had the evidence/expertise/etc. that they do, I would be less likely than them to conclude that there is a conspiracy.
5. So if someone offers an implausible-seeming theory to me involving a conspiracy, I discount it and conclude that its advocates just have a high prior for conspiracies.
Suddenly, this doesn’t look like a sound epistemological process at all. Steps (1) and (2) are fine, but (3), (4) and (5) go increasingly off the rails. It looks like you are deliberately shielding your anti-conspiracies prior, by discounting (even beyond their initial level of plausibility) theories that might challenge it. And if, on those occasions that a conspiracy is eventually proven, you refuse to update your prior on the likelihood of conspiracies (by insisting that such-and-such a theory doesn’t really count as a conspiracy theory, even though, at the time, you were happy to label it as such), then I would say that the process has become truly pathological, just as much as that of a “conspiracy theorist.”
Consider: why do some people have that higher prior? Mightn’t that higher prior be itself part of their tacit knowledge and expertise—in the same way that a doctor’s prior on the cause of a set of symptoms comes not from his ‘liking’ to diagnose people with tuberculosis, but from his own updates on past experience to which you are not privy? Aren’t we doing precisely the wrong thing by discounting the theory in response to the prior?
None of this means that you should become a 9-11 Truther, of course. But consider the 1999 Moscow bombings. I don’t have any particular evidence about the events, but there’s a plausible case that they were an FSB conspiracy. Shouldn’t that make you more willing to believe the hypothetical in the grandparent than otherwise? In my own experience, people who are most likely to believe in conspiracy theories are those who had their formative experiences in dictatorial countries where there really are lots of conspiracies—and so they subsequently see them everywhere. But by symmetry, it follows that those of us brought up in the West will be too reluctant to see conspiracies elsewhere.
(I’m not sure that #2 is the right formulation. A lot of people don’t think in terms sufficiently close to Bayesian inference that talking about their “priors” really makes sense. I’m not sure this is more than nit-picking, though.)
I agree that #3,4,5 “go increasingly off the rails” but I think what goes off the rails is your description, as much as the actual mental process it aims to describe. Specifically, I think you are making the following claims and blaming them on the term “conspiracy theory”:
That when someone thinks something is a “conspiracy theory” they discount it not only in the sense of thinking it less likely than they otherwise would have, but in the stronger sense of dismissing it completely.
That they are then immune to further evidence that might (if they were rational) lead them to accept the theory after all.
That if the theory eventually turns out to have been right, they don’t update their estimate for how much to discount theories on account of being suspiciously conspiracy-based.
Now, I dare say many people do do just those things. After all, many people do all kinds of highly irrational things. But unless I’m badly misreading you, you are claiming specifically that I and Brillyant do them, and you are laying much of the blame for this on the usage of the term “conspiracy theory”, and I think both parts of this are wrong.
Mightn’t that higher prior be itself part of their tacit knowledge and expertise
Yup. But the answer to that question is always yes, and therefore tells us nothing. (Mightn’t a creationist’s higher prior on the universe being only 6000 years old be part of their tacit knowledge and expertise? It might be, but I wouldn’t bet on it.)
But by symmetry, it follows that those of us brought up in the West will be too reluctant to see conspiracies elsewhere.
I don’t think the symmetry is quite there. People brought up in totalitarian countries who then move to liberal democracies see too many conspiracies. No doubt people brought up in liberal democracies who then move to totalitarian countries see too few, but it could still be that people brought up in totalitarian countries who stay there and people brought up in liberal democracies who stay there both see approximately the right number of conspiracies.
I wouldn’t regard a theory that says the September 11 attack was carried out by a generic terrorist group asked to do it by a single rogue government official acting alone as a “conspiracy theory”,
I remember September tenth, and if you’d said that to me then, I’m not sure I would have called it a conspiracy theory (I might have), but I certainly would have thought you were wildly overconcerned.
For sure. But you’d probably have said the same if I’d said that al-Qaeda terrorists were about to take over lots of planes and fly them into buildings, with thousands of lives lost. And yet that does in fact appear to have happened, and no one calls it a “conspiracy theory”.
So the fact that saying the day before that terrorists asked to do it by a single rogue government official were about to take over planes and fly them into buildings would have sounded wildly overconcerned and conspiracy-theory-ish can’t make believing now that that’s what happened a conspiracy theory.
(For the avoidance of doubt: I do not in fact think that the people who flew planes into buildings on “9/11” were asked to do so by any official of any government, rogue or otherwise.)
So, in what way is what you are saying relevant to the debate we are having? The way you use the term “conspiracy theory” obviously isn’t the way Brillyant uses it.
I am suggesting that the way he is using “conspiracy theory” amounts to a fnord. It doesn’t appear to have much more content than “conspiracy theory I don’t believe in”, and as such is as much about him as it is about the theory. I therefore suggest that discussion is unlikely to be productive unless better terms are used.
I pointed out a couple particular theories that are (a) ludicrous and (b) have a significant number of people who believe them.
I’d agree “conspiracies” happen all the time. But believing there were multiple gunmen in Dealey Plaza or that Bush ordered 9/11 is a special case of absurd belief.

So, basically you don’t understand the argument he’s making and therefore try to talk about something else?

Not entirely sure.
There seems to be a pattern recognition malfunction that takes place. “Conspiracies” happen all the time (i.e. people lie, governments have covert programs, office politics, etc.) and people seem to want to avoid being naive about the “real” reasons and cause for significant events like 9/11 or the Kennedy assassination.
It might just help to resolve cognitive dissonance between (a) powerful forces (like the gov’t) are generally in solid control, pulling a lot of strings and not being entirely up front about it and (b) major stuff happens that the gov’t couldn’t/didn’t prevent.
“real” reasons and cause for significant events like 9/11 or the Kennedy assassination.
Also in both the examples above the official explanation is politically inconvenient for a lot of people. For example, people like to think of JFK as a left wing martyr, thus him being killed by a communist is rather inconvenient to this narrative.
In 2013 there was a poll about conspiracy theories. The pollsters provide contingency tables that show relationships between belief in various conspiracy theories and voting in the 2012 US presidential election, political ideology, gender, party affiliation, race, and age. Looking at these tables it seems that belief in JFK conspiracy is somewhat similar across the political spectrum, and, maybe surprisingly, it was very liberal people (and not very conservative ones) who were most likely to agree with the official explanation. Obviously, grouping all JFK conspiracy theories into one option loses a lot of information, as liberals and conservatives would probably differ in which ones they find the most appealing. Moreover, neither liberals nor conservatives are homogeneous groups, and this poll does not show differences among subgroups (e.g. geographical or some other kind) that might exist.
Looking at these tables it seems that belief in JFK conspiracy is somewhat similar across the political spectrum, and, maybe surprisingly, it was very liberal people (and not very conservative ones) who were most likely to agree with the official explanation.
I wonder how many of them know Oswald was a communist.