This policy doesn’t exist only in psychology. Journals in other fields have similar policies requiring that submitted work include something more than a replication of the study in question, though my impression is that this is much more common in less rigorous areas like psychology. Journals probably do this because they want to be seen as cutting-edge, and they get less of that by publishing replication attempts. Given that, it makes some sense to reject both successful and unsuccessful replications: a journal that accepted only the unsuccessful ones would create a publication bias of its own. So they more or less successfully fob the whole thing off on other journals. (There’s something like an n-player prisoner’s dilemma here, with journals as the players deciding whether to accept replications at all.) This is bad, but it is understandable once one remembers that journals are run by selfish, status-driven humans, just like everything else in the world.
Yes, this is a standard incentives problem. But one to keep in mind when parsing the literature.
What rules of thumb do you use to ‘keep this in mind’? I generally try to never put anything in my brain that just has one or two studies behind it. I’ve been thinking of that more as ‘it’s easy to make a mistake in a study’ and ‘maybe this author has some bias that I am unaware of’, but perhaps this cuts in the opposite direction.
Actually, even with many studies and a meta-analysis, you can still get blindsided by publication bias. There are plenty of psi meta-analyses showing positive effects (with studies that were not pre-registered, and are probably very selected), and many more in medicine and elsewhere.
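To make the mechanism concrete, here’s a toy simulation (everything in it is invented for illustration, not taken from any real meta-analysis): run many honest studies of an effect that doesn’t exist, let only the positive significant results get ‘published’, and naively pool what survives.

```python
import math
import random

random.seed(0)

def one_study(n=30, true_effect=0.0):
    """One honest study: n observations, sigma = 1, two-sided z-test."""
    mean = sum(random.gauss(true_effect, 1.0) for _ in range(n)) / n
    se = 1.0 / math.sqrt(n)                        # known sigma = 1
    p = math.erfc(abs(mean / se) / math.sqrt(2))   # two-sided p-value
    return mean, se, p

# File drawer: 500 studies of a null effect, but only positive,
# significant results make it into the literature.
published = [(m, se) for m, se, p in (one_study() for _ in range(500))
             if m > 0 and p < 0.05]

# Naive inverse-variance pooling of the published studies alone.
weights = [1 / se**2 for _, se in published]
pooled = sum(w * m for w, (m, _) in zip(weights, published)) / sum(weights)
print(f"{len(published)} studies survived; pooled effect = {pooled:.2f}")
```

Every individual study here is run honestly, yet the pooled estimate comes out solidly positive with a small standard error, because the selection step threw away all the evidence for the null. Pre-registration attacks exactly that step.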
If it’s something I’d trust an idiot to draw the right conclusion on given good data, I’ll look for meta-analyses, p << 0.05, or do a quick-and-dirty meta-analysis myself if the number of studies is small enough. If it’s something I’m surprised has even been tested, I’ll give a single study more weight; if it’s something I’d expect to be tested a lot, I’ll give it less. If the data I’m looking for is orthogonal to the data the study is being published for, it probably doesn’t suffer from selection bias, so I’ll take it at face value. If a study’s result is ‘convenient’ in some way for the source that showed it to me, I’ll be more skeptical of selection bias and misinterpretation.
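For what it’s worth, the ‘quick-and-dirty meta-analysis’ step can be as simple as an inverse-variance-weighted average of the reported effects. A minimal sketch, assuming each study reports an effect estimate and a standard error (the function name and numbers below are invented):

```python
import math

def fixed_effect_meta(effects, std_errors):
    """Pool per-study effect estimates by inverse-variance weighting."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    p = math.erfc(abs(pooled / pooled_se) / math.sqrt(2))  # two-sided
    return pooled, pooled_se, p

# Three hypothetical studies: effect estimates and their standard errors.
pooled, se, p = fixed_effect_meta([0.30, 0.10, 0.25], [0.12, 0.15, 0.10])
print(f"pooled effect = {pooled:.2f} +/- {se:.2f}, p = {p:.4f}")
```

This treats the studies as estimating one common effect (a fixed-effect model); if the studies look heterogeneous, a random-effects model is more defensible, but for a handful of studies the weighted average already tells you most of what you need.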
If it’s a topic where methodological flaws or interpretation errors seem easy to make, I’ll try to actually dig in, look for them, and see if there’s an obvious new set of conclusions to draw.
Separately from judging how strong the evidence is, I’ll ‘put it in my brain’ even when there’s only a study or two behind it if it tests a hypothesis I already suspected was true, or if it makes too much sense in hindsight (aka high priors); otherwise I’ll file it with a ‘probably untrue, but something to watch out for’ tag.
How much money do you think it would take to give replications a journal with status on par with the new-studies-only ones?
Or alternately, how much advocacy of what sort? Is there someone in particular to convince?
It’s not something you can simply buy with money. It’s about getting scientists to cite papers in the replications journal.
What about influencing high-status actors (e.g. prominent universities)? I don’t know what the main influence points are for an academic journal, and I don’t know what things it’s considered acceptable for a university to accept money for, but it seems common to endow a professorship or a (quasi-academic) program.
Probably this method would cost many millions of dollars, but it would be interesting to know the order of magnitude required.