I’m curious about how others here process study results, specifically in psychology and the social sciences.
The p < 0.05 threshold for statistical significance is, of course, completely arbitrary. So when I get to the end of a paper and a result that came in at, say, p < 0.1 is described as “a non-significant trend favoring A over B,” part of me wants to go ahead and update a little, treating it as weak evidence. But I obviously don’t want to do even that if there’s no real effect and the evidence is unreliable.
I’ve found that study authors are often inconsistent about this: they’ll “follow the rules” and report that no main effect was detected when walking you through the results, then turn around in the discussion section and argue for a real effect based on trends in the data that aren’t individually significant.
The question of how to update is further complicated by (1) the general irreproducibility of these kinds of studies, which may call for some kind of global discount factor on the weight given to any such study, and (2) the general difficulty, as a human, of making small, well-calibrated adjustments to one’s beliefs.
This is exactly the situation where heuristics are useful, but I don’t have a good one. What heuristics do you all use for interpreting results of studies in the social sciences? Do you have a cutoff p-value (or a method of generating one for a situation) above which you just ignore a result outright? Do you have some other way of updating your beliefs about the subject matter? If so, what is it?
Thank you! This is exactly what I was looking for. Thinking in terms of bits of information is still not quite intuitive to me, but it seems like the right way to go. I’ve been away from LW for quite a while and had forgotten how nice it is to get answers like this to questions.
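For anyone else who finds the bits framing unintuitive, here’s a minimal sketch of one way to put numbers on it. It is not the method from the answer above; it uses the Sellke–Bayarri–Berger (2001) calibration, which says that for p < 1/e a p-value can support a Bayes factor against the null of at most 1 / (−e · p · ln p). That’s an upper bound, so actual evidence (especially given the replication worries I mentioned) will usually be weaker. The function names here are just mine for illustration.

```python
import math

def max_evidence_bits(p: float) -> float:
    """Upper bound on the evidence (in bits) a p-value can provide against
    the null, via the Sellke-Bayarri-Berger bound:
    BF(alt:null) <= 1 / (-e * p * ln p), valid for 0 < p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound only applies for 0 < p < 1/e")
    max_bayes_factor = 1.0 / (-math.e * p * math.log(p))
    return math.log2(max_bayes_factor)

def updated_probability(prior: float, bits: float) -> float:
    """Posterior probability after shifting the prior odds by `bits` bits."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * 2 ** bits
    return posterior_odds / (1 + posterior_odds)

for p in (0.05, 0.1):
    bits = max_evidence_bits(p)
    print(f"p = {p}: at most {bits:.2f} bits; "
          f"prior 0.50 -> posterior at most {updated_probability(0.5, bits):.2f}")
```

Running this: p = 0.05 caps out around 1.3 bits and p = 0.1 around 0.7 bits, so even taken at face value, a p < 0.1 “trend” can move a 50% prior to at most roughly 62%, and any reproducibility discount pushes that back toward the prior. That matches the “update just a little bit” intuition from my original question.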