While reading primary science literature, I’ve had the following experiences happen to me on multiple occasions.
1) Read a paper with a surprising result. Later discover it has critical flaws or didn’t pass replication. I’ve learned to increase my skepticism as results get more surprising. “This study is just wrong because of statistical issues or bad reporting” is now always one of the hypotheses in my mental arsenal, and I’ve found myself getting a bit better at predicting which results are just wrong, largely using the heuristic “this is too surprising to believe.”
2) Form a hypothesis while reading, then have it verified (or falsified) by something read later. Also, since one typically reads the methods before the results, one gets a lot of practice predicting results. (I don’t formally make predictions, but I find myself making them automatically as I read.)
Based on these experiences, I suggest that reading primary scientific literature is a good exercise in “alive” epistemic rationality training. The only drawback is that it takes a long time to get sufficiently acquainted with a field.
While I don’t read scientific literature that much, I do make formal predictions pretty often. Typically any time I notice something I’m interested in that will be easy to check in the future.
Will I get to bed on time today? Will I be early for the meeting tomorrow? Etc.
I second the anecdotal evidence that this is a “live” exercise. Sidenote: it took me way too long to realize I needed to write all my predictions down. I spent a few weeks thinking I was completely excellent at predicting things.
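As an illustration of why writing predictions down helps, here is a minimal sketch (my own, not from the comment) of a prediction log scored with the Brier score, a standard calibration measure:

```python
# Minimal prediction log: each entry is (stated probability, actual outcome 0/1).
# The Brier score is the mean squared error of the probabilities:
# 0.0 is perfect, and always guessing 50% earns 0.25.

def brier_score(predictions):
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Hypothetical entries in the spirit of the examples above:
log = [
    (0.8, 1),  # "Will I get to bed on time today?" - said 80%, it happened
    (0.6, 0),  # "Will I be early for the meeting?" - said 60%, it didn't
    (0.9, 1),
]

print(round(brier_score(log), 3))  # → 0.137
```

The point of the log is exactly the failure mode described: without recorded probabilities and outcomes, there is nothing to score, and memory quietly grades itself as "completely excellent."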
Here’s a prior that served me well for reading empirical literature:
1.) There is no effect (the null is true).
2.) If there is an effect but the data is observational, be very careful of any causal claims (they are most likely artifacts of modeling issues, confounding the authors missed, a flawed causal analysis, or [a thousand more things]).
3.) If there is an effect and it is causal, I probably already heard about it, and there are lots of papers establishing it. Given the publication rate and my reading rate, the chances of me stumbling on a genuinely new empirical result being reported for the first time are quite low.
4.) Conditional on me reading a paper, it’s either related to what I do, or the authors are “good at the media,” or (very rarely) it’s actually a breakthrough!
5.) Most papers are crap, most wrong findings are not retracted (incentives).
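The prior above can be made quantitative with a quick Bayes-rule sketch (the numbers are illustrative assumptions, not from the comment): if true effects are rare, even a nominally significant positive result should move belief only modestly.

```python
# Bayes' rule for "is this reported effect real?", with made-up numbers:
# prior = base rate of true effects among tested hypotheses,
# power = P(positive result | effect real),
# alpha = P(positive result | no effect), i.e. the false-positive rate.

def posterior_true(prior, power=0.8, alpha=0.05):
    """P(effect real | paper reports a positive result)."""
    p_positive = power * prior + alpha * (1 - prior)
    return power * prior / p_positive

# With a 5% base rate of true effects, a single positive paper leaves
# the claim more likely false than true:
print(round(posterior_true(0.05), 3))  # → 0.457
```

This is the arithmetic behind items 1 and 5: a low prior on "the null is false" plus a nonzero false-positive rate means a lone surprising paper is, more often than not, just wrong.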