Divergence on Evidence Due to Differing Priors—A Political Case Study
(This uses a politically charged topic as an example, but I’m hoping that people are willing to try to understand the points made despite that. Politics is Hard Mode, and I’m hoping to stay at a lower difficulty level for now, so I’ve asked that comments not discuss the object level politics.)
Last week on twitter, I saw two very different takes on how the United States reacted to 9/11, and the consequences. Both reflected an update to the authors’ views based on the data since that time, but the conclusions radically diverged. E.T. Jaynes posited that this can happen, but this is the first time I recognized it so clearly in practice, and I thought it was worth noting simply as an example. Beyond that, I wanted to point out how it can, to some extent, be avoided.
The first was Don Moynihan, who said: “It is important to remember 9/11 and the lives that were lost. Its also important to remember that period of American history fully, to understand that a terrorist attack triggered a series of catastrophic judgments by US politicians that led to the loss of more innocent lives.”
The second was David French, who said: “If you had told us on that day that we wouldn’t endure another mass-scale attack on American soil for at least 18 more years, we would have thought you were wildly optimistic. The achievement of our military and security establishment should never be underestimated.”
First, I want to note that the two views are based on different counterfactuals. Moynihan presumably assumes that the counterfactual rate of terrorist attacks, had the US not gone to war in Afghanistan and Iraq, would have been at least relatively low. He therefore updates on the fact that there have been very few credible attempts to mount large attacks on the US homeland, and concludes that any such attempts would have been foiled by standard US intelligence sources and policing. French explicitly calls out the fact that the commonly held prior for the number of attacks that would be mounted in the wake of 9/11 was high, and asserts that this prior was correct but-for the military interventions the US waged. He therefore updates on the same fact, that there have been very few credible attempts to mount large attacks on the US homeland, and concludes that the military interventions were successful.
Clearly, it is not the case that either person is ignoring the evidence. In this case, there are different reasons to update towards each of the models; the lack of credible attack attempts in the US contrasts with the large number in Iraq, and it’s plausible that without the US wars abroad, some of that effort would have been directed at the US. On the other hand, law enforcement was very successful in detecting and stopping attacks, so it’s plausible that few would have gotten through anyway. But since we can’t see what would have happened had the US not gone to war (i.e. counterfactual realities are unobservable), we may be tempted to conclude that evidence is useless in the face of different prior beliefs. This isn’t quite true.
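The mechanism behind this divergence can be made concrete with a toy Bayes calculation. The sketch below is purely illustrative: the hypothesis, the evidence, and every probability are made-up numbers standing in for the two observers’ differing counterfactual models, not estimates of anything real. The key point is that the observers share the evidence and even the same prior, but assign different likelihoods to it under the alternative hypothesis.

```python
def posterior(prior, lik_h, lik_not_h):
    """Bayes update for a binary hypothesis H given one observation E.

    prior      -- P(H) before seeing E
    lik_h      -- P(E | H)
    lik_not_h  -- P(E | not-H)
    """
    num = prior * lik_h
    return num / (num + (1 - prior) * lik_not_h)

# Hypothesis H: "the military interventions prevented further attacks".
# Evidence E: "no large-scale attack on US soil since 2001".
# Both observers start at P(H) = 0.5 and agree that E is likely if H is
# true; they disagree about how likely E would be if H were false.

# French-style model: absent the wars, attacks were very likely (E unlikely
# under not-H), so observing E strongly favors H.
french = posterior(prior=0.5, lik_h=0.8, lik_not_h=0.1)

# Moynihan-style model: policing would have stopped attacks anyway (E likely
# under not-H too), so observing E slightly disfavors H.
moynihan = posterior(prior=0.5, lik_h=0.8, lik_not_h=0.9)

print(round(french, 2))    # 0.89 -- credence in H rises well above the prior
print(round(moynihan, 2))  # 0.47 -- credence in H drifts below the prior
```

Same evidence, same prior on H, opposite directions of update; all the work is done by the likelihoods, which encode each observer’s unobservable counterfactual.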
If Moynihan and French had been asked in detail in 2001 what they expected in the case that the US did or did not go to war, they would now be forced to confront the ways in which their predictions failed. Perhaps their conclusions would be different, but most people don’t routinely make quantifiable predictions. Vaguely stated models can be twisted to fit almost any outcome. If people want to believe their model and not change their mind, then not only can the invisible dragon in the garage be post-hoc determined to be permeable to flour once an annoying rationalist proposes a test of the theory, but given the flexibility that language offers, people often specify models of the world that don’t even require post-hoc adjustment, just defensible clarifications. So unless we’re incredibly detailed in the predictions we request of people, we can’t stop data from being used to reinforce rather than revise beliefs.
Ideally, we’d have the ability to build a correct model, but we can’t—certainly not in the space of this post, near-certainly not in a couple years of research into international relations theory, and plausibly not at all due to the paucity of evidence and the number of uncertain variables involved.
The better approach, I think, is to take the outside view about the models. We have two different models, espoused by people with differing political viewpoints. Each model reflects a combination of motivated reasoning, selective blindness, and genuine attempts to understand the world, and we’re stuck uncertain which is less wrong.
But what we absolutely shouldn’t do—and without explicitly trying not to, likely would do—is notice the model that we’d prefer and (perhaps subconsciously) preferentially interpret evidence as supporting it and disproving the alternatives. Especially here, where both models are simplified and wrong in many ways, my advice is to try to reason under model-uncertainty, instead of trying to reason the way we are naturally inclined to, by picking sides in a fight. Absent further plausible arguments and evidence—which exist, but themselves need to be evaluated very carefully for the same reasons—we should look at the models as both plausible.
It’s probably worth noting that the two tweets don’t actually contradict each other.
They don’t contradict directly, but they reflect nearly incompatible updates to their world-models based on the same data.
I don’t have any object-level comments on this post (it seems well reasoned to me, and makes a reasonable point about the minds and the reasoning of the two people whose comments are considered), but to me this looks like the kind of discussion of politics that is appropriate for LW (and since it’s on the Frontpage, I take it the mods agree): it treats politics as a ground on which to analyze a phenomenon where people are motivated in ways that make the distinctions sharp enough for us to analyze easily, while avoiding saying anything about the object-level political conclusions.