One tactic I like to use is asking "how do they know this?", and then investigating whether it's even possible for their answer to demonstrate the thing they're claiming.
A lot of work doesn't tell you. Such work isn't necessarily wrong, because the authors might have a good answer they're not incentivized to share, but at a minimum it's going to be hard to learn from the work.
A lot of work claims to tell you, but when you look, it's lying. For example, when I investigated the claim that humans could do four hours of thought-work per day, I looked up the paper's citations and found they referred to experiments on busywork. Even if those studies were valid, they couldn't possibly prove anything about thought-work. I consider "pretending to have sources and reasons" a worse sin than "not giving a source or reason".
More ambiguously, I spent a lot of time trying to figure out how much we could tell, and at what resolution, from ice core data. I still don't have a great answer for the time period I was interested in. But I learned enough to know that the book I was reading (The Fate of Rome) was presenting the data as more clear-cut than it actually was.
On the other end, The Fall of Rome spends a lot of time explaining why pottery is useful for establishing the economic and especially trade status of an area/era. This was pretty hard to verify from external sources because it's original research by the author, but it absolutely makes sense and produces a lot of claims and predictions that could be disproved. Moreover, none of the criticism I found of The Fall of Rome addressed his points on pottery: no one was saying "well, I looked at Roman pottery and think the quality stayed constant through the 600s".
Thanks. This point in particular sticks with me:

> I consider "pretending to have sources and reasons" a worse sin than "not giving a source or reason"
I notice that one of the things that tips me off that a scientist is good is whether their work demonstrates curiosity. Do they seem like they're actually trying to figure out the answer? Do they think through and address counterarguments, or just try to obscure them?
This seems related: a person who puts no source might still be sharing their actual belief, but a person who puts a fake source seems like they’re trying to sound legitimate.
Yes, this seems like a good guideline, although I can’t immediately formalize how I detect curiosity. Vague list of things this made me think of:
I think this is a better guideline for books than scientific articles, which are heavily constrained by academic social and funding norms.
One good sign is if *I* feel curious in a concrete way when I read the book. What I mean by concrete is...
e.g. Fate of Rome had a ton of very specific claims about how climate worked and how historical climate conditions could be known. I spent a lot of time trying to verify these and even though I ultimately found them insufficiently supported, there was a concreteness that I still give positive marks for.
In contrast, for my most recently written epistemic spot check (not yet published), I spent a long time on several claims along the lines of "Pre-industrial Britain had a more favorable legal climate for entrepreneurship than continental Europe." I don't recall the author giving any specifics on what he meant by "more favorable", nor how he determined it was true. Investigating felt like a slog because I wasn't even sure what I was looking for.
I worry I'm being unfair here, because maybe if I'd found lots of other useful sources I'd rate the original book better. But when I investigated, I found there wasn't even a consensus on whether Britain had a strong or weak patent system.
Moralizing around conclusions tends to inhibit genuine curiosity in me, although it can loop around to spite curiosity (e.g., Carol Dweck).