[SEQ RERUN] Hindsight Devalues Science
Today's post, Hindsight Devalues Science, was originally published on 17 August 2007. A summary (taken from the LW wiki):
Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world. This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn’t fit what we really would have expected. We need to make a conscious effort to be shocked enough.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Hindsight bias, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
This one is really important, for a reason that wasn’t spelled out in the original article — hindsight bias makes people think folk wisdom is more valid than it really is, and thereby opens the door to all kinds of superstitious belief. If people interpret scientific evidence as confirming ‘common-sense’ or handed-down knowledge (because they select the ‘common-sense’ belief that turned out to be true after seeing the data, rather than having to select one from the morass beforehand), then they’re likely to increase their credence in other knowledge of that type. You see this all the time when people say things like “science is just now finding proof for medicines that indigenous societies have been using for thousands of years — so here, try this snake oil!”
Eliezer never said if he reversed the results a second time.
1. I found somewhat predictable
2. I found obvious
3. I didn't know one way or the other
4. I found moderately unpredictable
5. I found moderately unpredictable
Overall, I’m completely unsure whether or not they’re reversed. Well, I suspect they’re only reversed once, but not on account of the data itself.
I was pretty certain about the economic one later on, but I'm good at economics. If I had been wrong on that one, I would have looked for more data to back it up, and if I were indeed incorrect, it would have a huge impact on my political and economic opinions.
That vagueness is deliberate. Eliezer didn't say inside the post which option was correct, because hindsight would then take over and make the results seem predictable. The reader is instead left with the uncertainty, and with the problem of deciding what the actual true answer was. I went through each proposition individually and attempted to determine whether it was true. I was correct four out of six times, which is probably a reasonable score.
I think readers should attempt to determine their advance predictions on their own (preferably written down, although I admit to skipping that step) before looking up the results of the actual study. The correct results are available, but I would recommend that the true results not be discussed or linked to on Less Wrong.
The obvious thing to do would be to choose the reversal-parity randomly (by flipping a coin), independently for each claim.
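As a rough illustration of that coin-flip procedure, here is a minimal sketch. The placeholder strings are made up for the example and are not the findings from the original post; the point is only that each claim's parity is decided independently, so learning one reveal tells the reader nothing about the others.

```python
import random

# Hypothetical claim/negation pairs standing in for the study's findings.
claims = [
    ("claim 1 as actually found", "claim 1 reversed"),
    ("claim 2 as actually found", "claim 2 reversed"),
    ("claim 3 as actually found", "claim 3 reversed"),
]

# Flip a fair coin independently for each claim.
presentation = []
for true_version, reversed_version in claims:
    flip = random.choice([True, False])
    presentation.append(reversed_version if flip else true_version)

for line in presentation:
    print(line)
```

With independent flips, the reader has to evaluate each claim on its own merits rather than inferring a single global "reversed or not" answer.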