The Pre-Historical Fallacy
One fallacy that I see frequently in works of popular science—and also here on LessWrong—is the belief that we have strong evidence of the way things were in pre-history, particularly when someone argues that various aspects of our culture, psychology, or personal experience can be explained by how we evolved. Moreover, it is held implicitly that because we have this ‘strong evidence’, it must be relevant to the topic at hand. While it is true that the environment did affect our evolution and thus the way we are today, the evolution and anthropology of pre-historic societies are emphasized to a much greater extent than rational thought would indicate is appropriate.
As a matter of course, you should remember these points whenever you hear a claim about prehistory:
Most of what we know (or guess) is based on less data than you would expect, and the publish or perish mentality is alive and well in the field of anthropology.
Most of the information is limited and technical, which means that anyone writing for a popular audience will have strong motivation to generalize and simplify.
It has been found time and time again that for almost any statement we can make about human culture and behavior, there is (or was) a society somewhere that will serve as a counterexample.
Very rarely do anthropologists or members of related fields have finely tuned critical thinking skills or a strong background in the philosophy of science, and they are highly motivated to come up with interpretations of results that match their previous theories and expectations.
Results that you should have reasonable levels of confidence in should be framed in generalities, not absolutes. E.g., “The great majority of human cultures that we have observed have distinct and strong religious traditions”, and not “humans evolved to have religion”. It may be true that we have areas in our brain that evolved not only ‘consistent with holding religion’, but actually evolved ‘specifically for the purpose of experiencing religion’… but it would be very hard to prove this second statement, and anyone who makes it should be highly suspect.
Perhaps more importantly, these statements are almost always a red herring. It may make you feel better that humans evolved to be violent, to fit in with the tribe, to eat meat, to be spiritual, to die at the age of thirty… But rarely do we see these claims in a context where the stated purpose is to make you feel better. Instead they are couched in language indicating that they are making a normative statement—that this is the way things in some way should be. (This is specifically the argumentum ad antiquitatem or appeal to tradition, and should not be confused with the historical fallacy, but it is certainly a fallacy.)
It is fine to identify, for example, that your fear of flying has an evolutionary basis. However, it is foolish to therefore refuse to fly because it is unnatural, or to undertake gene therapy to correct the fear. Whether or not the explanation is valid, it is not meaningful.
Obviously, this doesn’t mean that we shouldn’t study evolution or the effects evolution has on behavior. However, any time you hear someone refer to this information in order to support any argument outside the fields of biology or anthropology, you should look carefully at why they are taking the time to distract you from the practical implications of the matter under discussion.
Are you talking about how things are in the actual sciences of anthropology and evolutionary psychology, or about the pop-evo-psych used by the media and advocates of various causes? If the former, mind linking a few published peer-reviewed articles making this mistake?
I am talking about pretty much anything that refers without references to the idea that ‘humans evolved to X’ or ‘it is human nature to X’. My background is in anthropology, with comparatively little exposure to evolutionary psychology, but in my experience the ethnographies, papers, and meta-studies that this information would ideally be based on are very clear about their shortcomings.
However, even when your data is a handful of surviving and marginalized ethnic groups, field notes from the 50s, and some promising bones, you can make some good strong statements about behavior, diet, etc. This leads to conclusions that are highly technical and precise, but with limited scope. Most science writers want exactly the opposite—non-technical and broadly applicable. So they latch onto some factoids, make ‘reasonable’ generalizations, and ignore the fact that most anthropologists are very clear on what is known and what is supposed.
There are examples of anthropologists giving interesting theories and arguing with limited data—the aquatic ape hypothesis is a perennial example (now long obsolete). But this is in no way a problem in the field; every science has hypotheses, and within the field the evidence, or lack of evidence, is clearly stated.
There’s your problem right there. :)
Would this be the same “pots not people” anthropologists who insisted there was no population replacement in pre-historic times?
No—the debate has been exaggerated a bit, but the old-school ethnologists who wanted to maintain that the peoples they were studying were the One True Tribe were really very old school. There have been people who are hesitant to abandon the original interpretations, but there have been few if any anthropologists in the last 70 years who would deny that conquest and population movements were a major cultural factor in most areas of the world.
Edit: P.S., Tu quoque and argumentum ad hominem are both fallacies. Trying to suggest that my argument ‘that people should look at the evidence’ is wrong by pointing to people like me who have not looked at the evidence is hardly LessWrong worthy behavior.
While this is obviously true and correct, I find it’s too often trotted out as a counterargument against (what seems to me to be) sensible claims about how we should, in the absence of evidence, hold a prior that mimicking what we approximate to be the ancestral environment will generally lead to better results. Too often there is unproductive back-and-forth between the “nature!” and the “naturalistic fallacy!” crowds.
It’s foolish to refuse altogether, of course. Yet, as flying is not ancestral to humans (for which we do have “strong evidence”), you should have a prior expectation of risks and drawbacks—which, as it turns out, are many: air pressure and quality differences, circadian rhythm disruption, etc. While gene therapy is perhaps going a bit far, you should certainly use pressurized cabins and melatonin, and perhaps make a conscious effort to avoid sitting in your seat for longer than 45 minutes at a stretch.
Whenever someone says “according to my priors, in the absence of evidence we should assume...” I hear “I know nothing about this but that won’t stop me pontificating.”
I understand that what you’re really complaining about is that some people are overconfident in their speculations (which is a fine and good thing to complain about) but the way you’ve phrased that objection here is a general counterargument against pretty much any statement that doesn’t fall within mathematics, including all heuristics, priors, educated guesses, and parsimony itself.
(And the literal meaning of “I know nothing about this but here’s my pontification” is very similar to “I have no evidence, but here is my prior assumption”. You’re just rewording it so it’s a low status thing.)
Might be helpful to narrow down the objection a little, to explain where precisely you feel people are commonly overreaching?
A prior is a statement of one’s knowledge (or to say exactly the same thing with an antonym, a statement of one’s ignorance), as expressed before performing an experiment or observation. It stands in contrast to one’s posterior, the state of belief after having updated on the evidence obtained. Outside of that context, one’s beliefs are not prior to anything, and talking about one’s priors is just, well, rewording it so it sounds like a high status thing.
But on reconsideration, I think I’m being unfair in making that response to your post. In the flying example you are talking about things that have been observed that as it happens confirm the stated prior. It’s just a thought about the casual use of the word “prior” that has been on my mind for a while.
The way that I’ve phrased this outside of lesswrong (where people don’t typically know what priors are) is: “In the absence of empirical data, things which are evolutionarily novel should be treated as guilty until evidence proves them innocent, whereas things which are evolutionarily familiar should be treated as innocent until evidence proves them guilty.”
“Prior” captures the connotation that this is only a provisional belief until more evidence surfaces in one neat word.
The former is an empirical claim of a strong pattern which, if true, requires explanation. The latter is a hypothesis that explains it, makes falsifiable predictions, and is useful if true.
Are you saying the specific hypothesis is problematic, or that the whole logical structure is?
To prove the second statement, we just need to find gene variants that are strongly correlated with religious beliefs. ETA: and manipulate them experimentally to determine the direction of causality, or observe the effect of natural mutations.
No, the second statement hinges on a causal claim, so correlations alone can’t prove it unless supplemented with strong causal assumptions. Gene variants being correlated with religious beliefs is consistent with three different causal hypotheses: (1) gene variants influence religious belief, (2) religious belief influences gene variants, and/or (3) gene variants and religious belief have some common cause. Correlations only tell us that at least one of the hypotheses is true; they don’t allow us to conclude that hypothesis 1 is correct.
ETA: [does Fonzie thumbs-up] aaaayyyy!
I don’t know why this got downvoted, as it is completely correct.
As a practical example, consider the correlation between intelligence and Ashkenazic ancestry, and how that arose, with respect to those three alternatives.
| Are you saying the specific hypothesis is problematic, or that the whole logical structure is?
Both the hypothesis and the logical structure are appropriate. What is not appropriate is presenting weak hypotheses as explanations without identifying them as weak and without giving alternative hypotheses.
To exaggerate just slightly, you might compare the use of these explanations to the use of government conspiracies as explanations for major political events. It is easy to come up with explanations that assume conspiracy, and it is obviously true that the government is hiding information from us in some cases, but without strong evidence that we do not currently have, tales of the Illuminati are only amusing, not productive. Likewise, explanations based on human evolution are very easy to construct, and it is obvious that we evolved…
| To prove the second statement, we just need to find gene variants that are strongly correlated with religious beliefs.
This is a bit off topic, but interesting! So… That’s not quite true. We might, for example, find that genes that are correlated with imagination, creativity, or schizophrenia are also correlated with religious beliefs. But that doesn’t mean that either these genes or these traits evolved ‘for religion’ in any meaningful sense… any more than we would use that sort of rhetoric to prove that ‘humans evolved specifically for the purpose of experiencing schizophrenia’. We are muddying teleology here just a bit, but in many cases that is exactly the purpose of these arguments.
AKA: https://en.wikipedia.org/wiki/Genetic_fallacy
Serving as a counterexample is a lot weaker than being a counterexample.
The second statement is not merely hard to prove, it is obviously wrong. Evolution does not have a goal. Things do not evolve because they are going to be useful. They evolve because each incremental change was an improvement, measured by reproductive success compared with the absence of the change.
Yes, but it would be fair, for example, to say that ‘eyes evolved for seeing’. This is fair because for the last few billion years, that is indeed what they were being optimized for. This gets more abstract when talking about things like religion, and much more dubious when you are talking about a period of perhaps a million years, but it is not quite to the point that I would call it ‘obviously wrong’.
I would agree that the statement ‘specifically with the intent of experiencing religion’ would be wrong; if you hold ‘purpose’ to mean ‘intent’, I have no objection to changing my language—perhaps ‘primary function’?
It would be fairer to say that eyes evolved by seeing.
“Purpose” and “intent” are synonyms.
Eyes have evolved into identifiable, specialised organs. One can reasonably say that their primary function is to see. I find it implausible that any part of the brain has religion as its primary function. Sight, even when merely a sensitivity to general illumination level, is of obvious use to any organism living in the light. Can the same be said of religion? Or is a tendency to personalise the forces of nature merely an epiphenomenon of some other useful mechanism?
Anyway, I’m still agreeing with your original point that one cannot strongly argue from the ubiquity of religion throughout human history and geography to its necessity as part of a healthy lifestyle.
I think we are generally in agreement, and have reached the same conclusions. However, if you are curious as to why I used this as an example, Google ‘god spot’. Depending what words you add to your search, you can see anything from confused science writers to creationists making all sorts of fun claims.
Biology isn’t mathematics, so what do you mean by “prove”? I’m getting an “evolution is just a theory” vibe here.
It is not at all hard to prove the first statement in that paragraph—although if you would like to use a word other than prove, I am okay with that. I certainly don’t mean to suggest that evolution is ‘just a theory’, but rather to point out that just because we have an effective law of nature that explains large amounts of the world, we cannot pretend that it will explain everything that we are interested in talking about.
To be clear, I am not saying that something like ‘spirituality’ or ‘humanity’ is somehow apart from the physical laws or observed processes of nature. I am saying that when we talk about the effects of human nature on X, we need to remember to consider what the scientific evidence for ‘human nature’ actually is, and what is conjecture. Most people do not use ‘human nature’ and related terms in a way that is meaningful, even in science writing.
Do you know of a clean criterion for deciding when you’re dealing with mere conjecture and when you’re dealing with evidence?
Most evidence tends to be pretty clear even in the field of anthropology. You can publish a speculative theory, but more often a paper is going to say “we found these bones with these markers here, {type of dating} indicates age of X with a margin of error of Y”; “technology X was found at Y at a depth of Z, this matches/does not match technology A in aspects B and C, but not P, Q, etc.” Ethnographies are a bit more suspect, but you can check who visited when and observed what, and see if the observations are consistent. And as you might imagine, genetic studies tend to be fairly clear cut.
When you make a more general statement about ‘human nature’, you start to move into frequency counts of observed societies, which means that your sampling frame is very limited, and much more likely to give you exceptions than rules. Much of what you see in informal writing is broadly extrapolated from comparison with animals and broad assumptions both about the environment and about humans’ (assumed lack of) ability to adapt without genetic change.
As a shorthand, as in most fields, if a claim is made and a peer reviewed paper is not cited, assume that this is not the proper source for this information.
Seems that this is key. The question is what kind of sampling is broad enough to support what kind of assertion. I’m not sure if that can always be neatly determined, so you might have two sets of claims on a continuum between well-supported and totally speculative with a muddy stretch in the middle.
Yes—and if authors gave an indication of what sort of evidence they were looking at, it would not be a fallacy. It is fine to report that ‘5/5 of the X that we looked at are Y’, but the claim that ‘X are Y’ is not so fine. Most educated people (for example, science writers) seem to understand this for most cases, but drop their critical thinking when it comes to humans…
I never said you did. I said you were committing the same fallacy as the people who do.
If you have a specific fallacy in mind, name it (or describe it). However, it is not a fallacy to say that one statement is better backed by evidence than another—you need to specify what evidence I am not being clear about, or any logical connections I have missed.
“The evidence isn’t 100% conclusive, therefore we should adopt a position of complete ignorance.”
I read the post as suggesting that we adopt a position of marginally increased skepticism relative to what prevails in common discussion.
My intended claim was that we should be more aware of the evidence presented. I also do believe that when we are aware of the available evidence, we will come to disregard most of the references we see to the content and causes of human nature, but this will depend on how you weigh the evidence.
However, if it will make the matter clearer, I can give you an example of evidence-based claims in the field of anthropology.
Here’s one that I hope isn’t too politically charged, but is still interesting: According to the Ethnographic Atlas Codebook derived from George Peter Murdock’s Ethnographic Atlas (1981), which recorded the marital composition of 1231 societies from 1960 to 1980, 186 societies were monogamous, 453 had occasional polygyny, 588 had more frequent polygyny, and four had polyandry.
Claims that could be made based on this include:
Approximately 85% of human cultures accept some form of polygyny.
Most human societies are polygamous.
Humans evolved to be polygamous.
Humans should be polygamous.
Polygamy gives humans a better chance of survival.
Primitive cultures are more likely to be polygamous.
Anthropologists are perverts.
Obviously, some of these claims you would not make unless you had a specific ax you wanted to grind. Some of these claims wouldn’t be supported by the data. Some of these claims would depend on how you, personally, tend to weigh evidence (particularly the normative ones). Even the first claim, the one most supported by the evidence, is questionable if you disagree on the terms (e.g., what constitutes a distinct culture?). But this is the sort of thing that we (by which I mean pretty much all of us) are used to dealing with in the soft sciences—we know how to navigate these things. Unfortunately, people are much less willing to navigate fuzzy data accurately, and often much less motivated to navigate human data honestly.
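Incidentally, the arithmetic behind the first (best-supported) claim is easy to check against the Atlas counts. A quick sketch—note that the Atlas records four polyandrous societies, which is what makes the four categories sum to the stated 1231:

```python
# Marital-composition counts from Murdock's Ethnographic Atlas (1981),
# as tabulated in the Ethnographic Atlas Codebook.
counts = {
    "monogamous": 186,
    "occasional polygyny": 453,
    "more frequent polygyny": 588,
    "polyandry": 4,
}

total = sum(counts.values())
print(total)  # 1231 societies, matching the stated sample size

# "Some form of polygyny" = occasional + more frequent polygyny.
polygynous = counts["occasional polygyny"] + counts["more frequent polygyny"]
share = polygynous / total
print(f"{share:.1%}")  # 84.6% -- i.e., "approximately 85%"
```

This is exactly the kind of claim that survives scrutiny: a frequency count over a stated sampling frame, not an inference about what humans ‘evolved to’ do.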