Scholarship and DIY Science
Related: Science: do it yourself, Some Heuristics for Evaluating The Soundness of the Academic Mainstream in Unfamiliar Fields, The Neglected Virtue of Scholarship
There’s been some recent discussion about the value of answering your questions by reading the established scholarly literature in a field. On the one hand, if you never read other researchers’ work, you’ll end up reinventing the wheel, and you may well get things very wrong. The best way to learn physics is to pick up a physics textbook, not to try to deduce the laws of nature on your own. On the other hand, sometimes the leading scientists are wrong. Or sometimes you have a question that has never been studied scientifically. It’s not always enough just to look at what the experts say; sometimes you need a little independent thought, or even the “DIY science” approach of looking at the data yourself and drawing your own conclusions. Ideally, these two approaches aren’t really rivals; they’re complementary. “DIY science” gives you insights into how to conduct scholarship—what resources to seek out, and what claims to trust.
Looking up other people’s research is scholarship, not science. Scholarship isn’t bad. In most fields scholarship is useful, and in technical fields it’s a prerequisite to doing science. But scholarship has its drawbacks. If there is no high-quality body of scientific research on a question (say, “How do people get very rich?”) then looking up “expert” opinion isn’t especially useful. And if there is a body of scientific research, but you have reason to suspect that the scientific community isn’t well-informed or evenhanded, then you can’t just rely on “experts.” But you don’t want to be mindlessly contrarian either. You need some independent way to evaluate whom to believe.
Take a topic like global warming. Is the scientific literature accurate? Well, I don’t know. To know the answer, I’d have to know more about geophysics myself, be able to assess the data myself, and compare my “DIY science” to the experts and see if they match. Or, I’d have to know something about the trustworthiness of peer-reviewed scientific studies in general—how likely they are to be true or false—and use that data to inform how much I trust climate scientists. Either way, to have good evidence to believe or not believe scientists, I’d need data of my own.
The phrase “DIY science” makes it sound like there’s some virtue in going it alone. All alone, no help from the establishment. I don’t think there’s any virtue in that. Help is useful! And after all, even if you tabulate your own data, it’s often data that someone else gathered. (For example, you could learn how people become billionaires by compiling stats on Fortune 500 lists.) This isn’t My Side of the Mountain science. It’s not idealizing isolation. The “do it yourself” step is the step where you draw your own conclusion directly from data. You don’t have to neglect scholarship—you just have to think for yourself at some point in the process.
The trouble with doing all scholarship but no science is that you have no way to assess the validity of what you read. You can use informal measures (prestige? number of voices in agreement? most cited? most upvotes?) but how do you know if those informal measures correlate with the truth of an argument? At some point you have to look at some kind of data and draw your own conclusion from it. Critically assessing scientific literature eventually requires you to do some DIY science. Here, let’s look at the data section. Do the paper’s conclusions match its actual data? Here, let’s look at this discipline’s past record at predicting future events. Does it have a good track record? Here, let’s look at these expert recommendations. How often are they put into practice in the real world, and with what success?
One way of uniting scholarship and DIY science is the category of metastudies—how often is such-and-such class of experts or publications correct? Some examples, with a toy version of the check sketched after the list:
Political pundits’ predictions are hardly better than random, and educational credentials don’t help accuracy.
As much as 90% of the published medical information that doctors rely upon may be wrong.
To get really meta: most CDC meta-analyses are methodologically flawed.
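You can run a toy version of this kind of check yourself whenever you can line up predictions next to outcomes. Here is a minimal sketch in Python with made-up data; the Brier score is just the mean squared error of the stated probabilities, so lower is better and always answering 0.5 scores 0.25.

```python
# Hypothetical records: (expert's stated probability that an event happens, outcome).
predictions = [
    (0.9, True), (0.8, False), (0.7, True), (0.95, False), (0.6, True),
]

brier = sum((p - float(hit)) ** 2 for p, hit in predictions) / len(predictions)
hit_rate = sum((p >= 0.5) == hit for p, hit in predictions) / len(predictions)
print(f"Brier score: {brier:.3f} (lower is better), hit rate: {hit_rate:.0%}")
```

Five fake data points prove nothing, of course; the point is that scoring a pundit is a small amount of arithmetic once you’ve collected their predictions.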
Another way to unite scholarship and DIY science is to look for patterns, commonalities, and disagreements in the existing literature. Thanks to modern large databases, you can sometimes do this statistically; a rough sketch follows the examples below.
IHOP searches the PubMed literature by genes or proteins.
Google ngrams allow you to search word prevalence to get a rough idea of trends in writing over time.
The digital humanities, a relatively new field, run somewhat more subtle statistics on historical and scholarly documents.
If you want to attach a number to “scientific consensus,” look at the Science Citation Index. Many of these tools are restricted to universities, but you can, for example, measure how highly cited a journal or an author is.
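At its crudest, the pattern-hunting these tools support looks something like the sketch below: tally how often a term turns up in a corpus of abstracts, year by year. The records here are invented; in practice they might come from a PubMed export or a collection you scraped yourself.

```python
# Made-up (year, abstract) records standing in for a real literature export.
from collections import Counter

corpus = [
    (1995, "dietary fat and heart disease risk"),
    (2005, "saturated fat, inflammation, and heart disease"),
    (2015, "sugar intake and heart disease: a reappraisal"),
    (2015, "dietary fat reconsidered"),
]
term = "sugar"

totals = Counter(year for year, _ in corpus)
hits = Counter(year for year, abstract in corpus if term in abstract.lower())
for year in sorted(totals):
    print(year, f"{hits[year]}/{totals[year]} abstracts mention '{term}'")
```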
EDIT DUE TO MORENDIL: it’s possible, using graph theory alone, to notice misleading information in a scientific subfield by looking at the pattern of citations. In the citation graph you can observe bias (a systematic tendency to under-cite contradictory evidence), amplification (a belief growing more prevalent in the scientific community through new links to a few influential, highly cited papers, even though no new data is being presented), and invention (the propagation of claims that are not actually backed by data anywhere). Negative results (the lack of conclusive results) systematically fail to propagate through scholarly networks.
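Here is a minimal sketch of what the first two checks might look like in code, assuming you have already labeled each paper’s stance on a claim and whether it reports primary data. The paper IDs, labels, and citation links below are made up, and networkx supplies the graph machinery.

```python
# Toy citation graph for a single claim; stance and primary_data labels are
# assumed to come from your own reading of the papers.
import networkx as nx

papers = {
    "A": ("supports", True),     # original experiment
    "B": ("supports", False),    # review citing A
    "C": ("supports", False),    # review citing A and B
    "D": ("contradicts", True),  # failed replication, cited by nobody
}
G = nx.DiGraph()
for pid, (stance, primary) in papers.items():
    G.add_node(pid, stance=stance, primary_data=primary)
G.add_edges_from([("B", "A"), ("C", "A"), ("C", "B")])  # edge = "cites"

# Bias: of the citations made by supporting papers, how many acknowledge
# the contradictory evidence at all?
support_cites = [(u, v) for u, v in G.edges if G.nodes[u]["stance"] == "supports"]
contra_cited = sum(G.nodes[v]["stance"] == "contradicts" for _, v in support_cites)
print("citations pointing at contradictory papers:", contra_cited, "of", len(support_cites))

# Amplification: how much citation "authority" (PageRank, where cited papers
# receive rank from the papers citing them) sits in papers with no primary data?
rank = nx.pagerank(G)
no_data_share = sum(r for pid, r in rank.items() if not G.nodes[pid]["primary_data"])
print(f"share of authority held by papers with no primary data: {no_data_share:.2f}")
```

On real data you would compare these numbers against the actual mix of supporting and contradicting studies in the field; the sketch only shows that the checks can be made mechanical.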
We’d need more sophisticated data analysis than now exists, but one of my dreams is that one day we could develop tools that would search the existing literature for a claim, say, “Slavery caused the American Civil War,” estimate how contentious that claim is, count how many sources are for and against it, show what the sources’ citation rates and links to other phrases tell you about who holds what opinions, and somewhat automate the process of reading and making sense of what other people wrote.
An even more ambitious project: making a graph of which studies invalidate or cast doubt on which other studies, on a very big scale, so you could roughly pinpoint the most certain or established areas of science. This would require some kind of systematic method of deducing implication, though.
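As a very rough illustration of the data structure, here is a toy “casts doubt on” graph; the study names and the scoring rule are invented for the example.

```python
# Each edge points from a challenging study to the study it casts doubt on.
import networkx as nx

doubt = nx.DiGraph()
doubt.add_edges_from([
    ("replication_2015", "original_2009"),  # failed replication
    ("critique_2016", "original_2009"),     # methodological critique
    ("reanalysis_2018", "critique_2016"),   # the critique itself is challenged
])

for study in doubt.nodes:
    challengers = nx.ancestors(doubt, study)  # direct and indirect challenges
    print(f"{study}: challenged by {len(challengers)} other studies")
```

Counting challengers transitively is obviously too crude; a rebuttal of a critique should arguably restore confidence rather than add doubt, which is exactly why the systematic method of deducing implication mentioned above would be needed.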
This can get more elaborate than you might prefer, but the point is that if you really want to know how valid a particular idea you’ve read is, there are quantitative ways to get closer to answering that question.
The simplest check, of course, is the sanity test: run some simple figures to see if the “expert” view is even roughly correct. Does past stock performance actually predict future stock performance? Well, the data’s out there. You can check. (N.B. I haven’t done this. But I would if I were investing.) Is cryonics worth the money? Come up with some reasonable figures for probability and discount rates and do a net present value calculation. (I have done this one.) Or consider blatantly unscientific data-gathering: a survey on when and how authors get published. Online polls and other informal data-gathering are bad methodology, but way better than nothing, and often better than intuitive “advice.” If the expert “consensus” is failing your sanity tests, either you’re making a mistake or they are.
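For concreteness, here is roughly what the cryonics back-of-the-envelope calculation looks like. Every figure below is an illustrative placeholder to be replaced with your own estimates; the point is the shape of the calculation, not the verdict it happens to print.

```python
# All figures are illustrative placeholders, not estimates anyone endorses.
p_success = 0.05              # hypothetical chance cryonics works for you
value_if_revived = 5_000_000  # hypothetical dollar value placed on revival
discount_rate = 0.03          # annual discount rate
years_until_revival = 100     # hypothetical wait before revival is possible
annual_cost = 700             # rough yearly cost (membership plus insurance)
years_paying = 40             # years of payments

benefit_npv = p_success * value_if_revived / (1 + discount_rate) ** years_until_revival
cost_npv = sum(annual_cost / (1 + discount_rate) ** t for t in range(years_paying))
print(f"discounted expected benefit: ${benefit_npv:,.0f}")
print(f"discounted cost of payments: ${cost_npv:,.0f}")
print("passes the sanity test with these numbers?", benefit_npv > cost_npv)
```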
A recurring question is how and when to be contrarian—when to reject mainstream expert judgment. One obvious case is when you suspect experts are biased to protect interests other than truth. But experts can be biased while still being right. (For example, biologists certainly have systematic biases, but that doesn’t mean their embrace of the theory of evolution is the result of irrational bias.) One way or another, you’re stuck with a hard epistemic problem when evaluating claims. You can test them against your own data—but then you have to decide how much you trust your own back-of-the-envelope computations or informal data collection. You can test them against the “consensus” or bulk of the literature—but then you have to decide whether you trust the consensus. You can test them against the track record of the field—but then you have to decide whether you trust the very meta-analysis you’re using. There’s no single magic bullet. But it’s probably worth it, if you’re seriously curious about the truth of a claim, to try a few of these different approaches.