Googling is the first step. Consider adding scholarly searches to your arsenal.
Related to: Scholarship: How to Do It Efficiently
There has been a slightly increased focus on the use of search engines lately. I agree that using Google is an important skill—in fact, I believe that for years I have come across as significantly more knowledgeable than I actually am just by quickly looking up information when I am asked something.
However, there are obviously some types of information which are more accessible via Google and some which are less accessible. For example, distinct characteristics, specific dates of events, etc. are easily googleable[1] and you can expect to quickly find accurate information on the topic. On the other hand, if you want to find out more ambiguous things, such as the effect of having more friends on weight, or the negative and positive effects of a substance, then googling might leave you with contradictory results, inaccurate information, or at the very least it will likely take you longer to get to the truth.
I have observed that in the latter case (when the topic is less ‘googleable’) most people, even those knowledgeable about search engines and ‘science’, will just stop searching for information after not finding anything on Google, or even before[2], unless they are actually willing to devote a lot of time to the search. This is where my recommendation comes in: consider doing a scholarly search like the one provided by Google Scholar.
And, no, I am not suggesting that people should read a bunch of papers on every topic that they discuss. By using some simple heuristics we can easily gain a pretty good picture of the relevant information on a large variety of topics in a few minutes (or less in some cases). The heuristics are as follows:
1. Read only, or mainly, the abstracts. This is what saves you time while giving you a lot of information in return, and it is the key to the most cost-effective way to quickly extract information from a scholarly search. Often you won’t have immediate access to the full paper anyway, but you can almost always read the abstract. And if you follow the other heuristics you will still be looking at relatively ‘accurate’ information most of the time. If you are looking for more information and have access to the full paper, the discussion and conclusion sections are usually the second-best thing to look at; and if you are unsure about the quality of the study, you should also look at the methods section to identify its limitations.[3]
2. Look at the number of citations for an article. The higher the better. Fewer than 10 citations in most cases means that you can find a better paper.
3. Look at the date of the paper. Often more recent = better. However, you can expect fewer citations for more recent articles, and you need to adjust accordingly. For example, if the article came out in 2013 but has already been cited 5 times, this is probably a good sign. For very new articles, the subheuristic I use is to evaluate the ‘accuracy’ of the article by judging the author’s general credibility instead—argument from authority.
4. Meta-analyses/Systematic Reviews are your friend. This is where you can get the most information in the least amount of time!
5. If you cannot find anything relevant fiddle with your search terms in whatever ways you can think of (you usually get better at this over time by learning what search terms give better results).
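Heuristics 2 and 3 can be folded into a single rough score: citations per year since publication, so that a 2013 paper with 5 citations can compete with a 2005 review with a few hundred. A minimal sketch in Python (the paper data and cutoff year are hypothetical, made up for illustration):

```python
def citation_rate(paper, current_year=2014):
    """Citations per year since publication: a crude proxy that
    combines the citation-count and recency heuristics."""
    age = max(current_year - paper["year"], 1)  # avoid division by zero
    return paper["citations"] / age

# Hypothetical Google Scholar results for some query
papers = [
    {"title": "Older, well-cited review", "year": 2005, "citations": 400},
    {"title": "Recent article", "year": 2013, "citations": 5},
    {"title": "Obscure paper", "year": 2008, "citations": 3},
]

ranked = sorted(papers, key=citation_rate, reverse=True)
for p in ranked:
    print(f"{p['title']}: {citation_rate(p):.1f} citations/year")
```

Under this score a heavily cited older review still ranks first, the recently cited 2013 article comes second, and a six-year-old paper with 3 citations sinks to the bottom—matching the “fewer than 10 citations” warning in heuristic 2.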
That’s the gist of it. By reading a few abstracts in a minute or two you can effectively search for our scientific knowledge on a subject at almost the same speed as searching for specific information on topics I dubbed googleable. In my experience, scholarly searches on pretty much anything can be really beneficial. Do you believe that drinking beer is bad but drinking wine is good? Search on Google Scholar! Do you think it is a fact that social interaction is correlated with happiness? Google Scholar it! Sure, it might seem obvious to you that X, but it doesn’t hurt to search on Google Scholar for a minute just to be able to cite a decent study on the topic to the X disbelievers.
This post might not be useful to some people, but it is my belief that scholarly searches are the next step in efficient information-seeking after googling, and that most LessWrongers are not utilizing them enough. Hell, I only recently started doing this actively and I still do not do it enough. Furthermore, I fully agree with this comment by gwern:
My belief is that the more familiar and skilled you are with a tool, the more willing you are to reach for it. Someone who has been programming for decades will be far more willing to write a short one-off program to solve a problem than someone who is unfamiliar and unsure about programs (even if they suspect that they could get a canned script copied from StackExchange running in a few minutes). So the unwillingness to try googling at all is at least partially a lack of googling skill and familiarity.
A lot of people will be reluctant to start doing scholarly searches because they have barely done any, or because they have never done one. I want to tell those people to still give it a try. Start by searching for something easy, maybe something you already know from LessWrong or from somewhere else. Read a few abstracts; if you do not understand a given abstract, try finding other papers on the topic—some authors adopt a more technical style of writing, others focus mainly on statistics, etc., but you should still be able to find some good information if you read multiple abstracts and identify the main points. If you cannot find anything relevant, move on and try another topic.
P.S. In my opinion, when you are comfortable enough to have scholarly searches as a part of your arsenal you will rarely have days when there is nothing to check for. If you are doing 1 scholarly search per month for example you are most probably not fully utilizing this skill.
[1] By googleable I mean that the search terms are Google-friendly—you can relatively easily and quickly find relevant and accurate information.
[2] If the people in question have developed a sense for what type of information is more accessible via Google, they might not even try to google the less accessible type of thing.
[3] If you want a better and more accurate view of the topic in question, you should read the full paper. The heuristic of mainly focusing on abstracts is cost-effective, but it invariably results in a loss of information.
As someone who actually does academic research and has spent countless hours reading the fine details of “prestigious” publications, I can tell you that 90% of the material out there is total garbage, and it is difficult to know whether a paper is garbage just by reading the abstract. Peer review doesn’t help either, because review boards are lazy and will never double-check any of your actual footwork. They will never read your code, rerun the algorithms you claim you used, etc. A simple glitch can change the sign of your answer, but you typically stop looking for glitches in your 10,000 lines of code once the answer “looks right”.
So, if there is any controversy on the issue, remain agnostic. A 90/10 split in academia is totally reasonable once you factor in the intellectual biases/laziness/incompetence of researchers and publishers. All an article tells you is that somewhere out there, someone with an IQ >= 110 has an opinion they are publishing to further their career. I don’t place very much weight on that.
Don’t become one of these “reserch sez...” people who just regurgitate abstracts. You’ll wind up worried about the dangers of red meat, getting too much sunlight, doing 45 minutes of cardio every day, etc.
I am worried about getting too much sunlight. Apart from any increased risk of melanoma, it @#%@ hurts! Your skin goes red, painful, and sensitive to the touch. A little later the skin peels off. If there was sufficient exposure, bleeding is involved.
Also, as long as we’re picking on specific examples...
I’m confused by the inclusion of this. Is the jury not yet settled on the benefits of daily cardio?
As a test case, I tried applying this technique to the Dangers of Red Meat, which is apparently a risk factor for colorectal cancer. The abstracts of the first few papers claimed that it is a risk factor with the following qualifications:
- if you have the wrong genotype (224 citations)
- if the meat is well-done (178 citations)
- if you have the wrong genotype, the meat is well-done, and you smoke (161 citations)
- only for one subtype of colorectal cancer (128 citations)
- only for a different subtype not overlapping with the previous one (96 citations)
- for all subtypes uniformly (100 citations)
- no correlation at all (78 citations)
Correct me if I’m wrong, but most of those look like the result of fishing around for positive results, e.g. “We can’t find a significant result… unless we split people into a bunch of genotype buckets, in which case one of them gives a small enough p-value for this journal.” I haven’t read the studies in question so maybe I’m being unfair here, but still, it feels fishy.
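The subgroup worry is easy to quantify: if a genuinely null effect is tested independently in k genotype buckets at α = 0.05, the chance of at least one “significant” finding is 1 − (1 − α)^k. A quick check (the bucket counts here are arbitrary, chosen only to illustrate the scaling):

```python
def p_any_false_positive(k, alpha=0.05):
    """Probability of at least one false positive across k
    independent tests of a true null hypothesis."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:2d} subgroup tests -> P(at least one hit) = {p_any_false_positive(k):.2f}")
```

With 20 independent buckets there is roughly a 64% chance of a publishable-looking p-value from pure noise, before any publication bias enters the picture.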
You may be right. It’s not quite M&M colors, though; there was apparently some reason to believe this allele would have an effect on the relationship between red meat and cancer. If anything, you might claim that the fishing around is occurring at the meta level: the buckets are “genetics has an effect”, “the cancer’s location has an effect”, “how the meat is cooked has an effect”, and so on.
I believe at least part of the reason for this is that “the correlation between red meat and cancer is 0.56” or whatever is not an interesting paper anymore, so we add other variables like smoking to see what happens. (Much like “red meat causes cancer” is a more interesting paper than “1% of people have cancer”.) I’m not sure whether this is good or bad.
I punched in “red meat” to google scholar.
http://care.diabetesjournals.org/content/27/9/2108.short 197 citations—concluding that eating red meat “may” increase your risk of type II diabetes.
http://ajcn.nutrition.org/content/82/6/1169.short 173 citations—Shows more “correlations” and “associations” for the “beneficial effect of plant food intake and an adverse effect of meat intake on blood pressure.”
Seems accurate.
People who eat red meat tend to:
Do you understand why it’s not… entirely honest… to blame red meat? It shows up as a statistical correlate. It can be used to identify people at risk for these conditions, but then researchers make a leap and infer a causal relationship.
It’s an ideological punchline they can use to get published. And that’s all.
You do understand that scientists don’t just look for correlations; they build somewhat more complex models than that. Do you seriously think that things like that are not taken into account!? Hell, I am willing to bet that a bunch of the studies test those correlations by comparing, for example, smokers who eat more red meat versus smokers who eat less or no red meat.
I mean, come on.
Taking everything into account is difficult, especially when you have no idea exactly what you ought to be taking into account. Even if you manage to do that exactly right, there is still publication bias to deal with. And if you are using science for practical purposes, it’s even harder to make sure the research is answering the right question in the first place. Sieben’s comments sound anti-science... but really they are frustration directed at a real problem. There really is a lot of bad science out there—sometimes it is even published in top journals—and even good science is usually extremely limited insofar as you can use it in practice.
I think it’s just important to remember that while scientific papers should be given more weight than almost every other source of evidence, that’s not actually very much weight. You can’t instrumentally rely on a scientific finding unless it’s been replicated multiple times and/or has a well understood mechanism behind it.
Yes. You should read the papers. They’re garbage.
Remember that study on doctors and how they screwed up the breast cancer Bayesian updating question? Only 15% of them got it right, which is actually surprisingly high.
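For reference, the commonly quoted version of that problem (I’m assuming the classic numbers here: 1% prevalence, 80% test sensitivity, 9.6% false-positive rate) gives a posterior of roughly 8% via Bayes’ rule, far below the 70–80% most doctors reportedly answer:

```python
prevalence = 0.01            # P(cancer)
sensitivity = 0.80           # P(positive | cancer)
false_positive_rate = 0.096  # P(positive | no cancer)

# Total probability of a positive test
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' rule: P(cancer | positive)
posterior = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {posterior:.1%}")  # about 7.8%
```

The base rate dominates: even an 80%-sensitive test barely moves a 1% prior.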
Okay, now how much statistical training do you think people in public health, a department that is a total joke at most universities, have? Because I know how much statistical training the geostatisticians have at UT, and they’re brain damaged. They can sure work a software package though...
“A bunch of” ≠ the majority. I’m sure there could be a few, but it wouldn’t be characteristic. I’m not saying ALL the studies are going to be bad, just that bulk surveys are likely to be garbage.
Maybe I should have chosen “Theologians’ opinions on God” rather than “Middle aged/classed suburban nutritionists’ opinions on red meat”. I thought everyone here would see through frakking EPIDEMIOLOGICAL STUDIES, but I guess not.
Doctors, not researchers in the top peer-reviewed papers...
Haven’t been interested in the subject at all and have never looked into it. And anyway, if you are right and they are completely fake and wrong, this would not be general evidence that papers are never better than coin flips.
I am leaving this conversation. If you really believe that the most-cited, accepted, recent articles etc. are as accurate as a coin flip because people have biases and because the statistics are not perfect and if nothing that I’ve said so far has convinced you otherwise then there is no point in continuing.
Also, not to be rude, but I do not see why you would join LessWrong if you think like that. A lot of the material covered here and a lot of the community’s views are based on accepted research. The rest is based on less accepted research. Either way, the belief that research (especially well peer-reviewed research) brings you closer to the truth than coin flips on average is really ingrained in the community.
Researchers who got there because other researchers said they were good. It’s circular logic.
It’s prima facie evidence. That’s all I hoped for. I haven’t actually done a SRS of journals by topic and figured out which ones are really BS. But of the subjects I do know about, almost all of the literature in “top peer reviewed” papers is garbage. This includes my own technical field of engineering/simulation.
Straw man. I did not say the statistics were not “perfect”. And I did not say they were “as accurate as a coin flip”. In the red meat example, they are worse.
A lot of LW is analytical.
Research is a good starting point to discover the dynamics of a certain issue. It doesn’t mean my final opinion depends on it.
I followed the first link http://care.diabetesjournals.org/content/27/9/2108.short and the abstract there had “After adjusting for age, BMI, total energy intake, exercise, alcohol intake, cigarette smoking, and family history of diabetes, we found positive associations between intakes of red meat and processed meat and risk of type 2 diabetes.”
And then later, “These results remained significant after further adjustment for intakes of dietary fiber, magnesium, glycemic load, and total fat.” though I’m not sure if the latter was separate because it was specifically about /processed/ meat.
So long as they keep the claim as modest as ‘eating red meat “may” increase your risk of type II diabetes,’ it seems reasonable. They could still be wrong, of course, but the statement allows for that. I should note here that the study was of women over 45, not a general population of an area.
If there’s better evidence that the search is not finding, that is a problem.
Red meat adds a literal sizzle to research papers.
Yes, there is A LOT of garbage. This is why I am recommending using heuristics such as number of citations—to maximize the accuracy of the information. And, yes, peer review is not perfect, but compare journals/fields that rely on peer review to those that do not...
Furthermore, Systematic Reviews have a pretty good track record as far as I know and this is why I recommend them.
This post is not so much about academically controversial issues, but even in those cases, if you don’t have any reason not to, then siding with the majority will bring you to the truth more often than the alternative.
This is the type of thing that you see if you do a normal google search instead of a scholarly search. I have not checked but I bet that the most cited recent review articles on those issues can provide you with some pretty good information.
My argument really boils down to 2 things. Researchers being systematically biased (ex: red meat), and researchers having a very low probability of actually knowing the right answer but publishing something that fits some narrow set of data (ex: “advanced” simulation). To be sure, I’ve used research to make a lot of informed decisions over my lifetime, but it’s always been straightforward, pretty much unanimous, and with lots of testimonials from online groups to give it statistical mass.
Would you adopt this heuristic in any other scenario where the “right answer” isn’t obvious? Music, books, diet, politics, etc? Even when you restrict your sampling pool to “experts only”, the results are still pretty bad. These people are self-selecting to do research. It’s not like you’re picking a random disinterested intelligent person and asking them to study the problem. No one becomes a nutritionist because they have no opinion on food.
The overwhelming trend is fear mongering coming out of epidemiological studies.
I acknowledged that there are problems; nothing is perfect. But I don’t know what you want from me. To convince you that science as a whole works!? Or that information in studies is more accurate than made-up information?
All I am advocating is to look for ‘respected’ studies and look at them. If you don’t think that looking at studies ‘approved’ by the field gives you more accurate information than not doing it I can’t really do much.
Yes, I believe in science no matter what scenario I am in. You don’t need to blindly trust it or anything, I put different weights on different claims etc. but I would still take into account information from recent, well-cited meta-analyses or whatever I can get my hands on.
So I should worry that researchers are interested in the topic they are researching. What douchebags, eh?
Okay. Citation? And remember we are not talking about ‘most studies’ or anything. The studies that we are talking about are well cited, by known researchers if possible and systematic reviews if possible.
This is exactly my point. Studies on many, many subjects may not contain information more useful than a coin flip, let alone an educated guess.
This is question begging. You have to have a theory about why a “respected” study is likely to be correct. I’ve already provided theories explaining why they’re likely to be incorrect a large portion of the time.
I believe in science too. But “science” and “science articles” are different things. But you didn’t answer my question, and I really want to drive home that almost no one thinks it’s a good idea to trust “majority expert opinion” in all sorts of areas.
Don’t be dense. You know exactly what I mean. A vegetarian goes to grad school and does research on nutrition. What do you think is going to happen?
Citations above where you commented. You can also just punch “red meat” into google scholar and it’s all about how you can die from it.
Wow. This is a pretty far-fetched claim.
My theory is that respected papers are, on average, produced by a method more closely resembling the scientific method than a coin flip, and thus they get more accurate results than a coin flip. There, happy?
I did answer your question—the answer was yes.
Except, you know, the majority.
He is biased. So is the guy who went into grad school with anti-vegetarian views. If those guys are not changing their opinions based on the evidence, then the chance is smaller (though not nil) that their papers will be highly cited.
You call studies that find correlations between things fear mongering? Oh my.
Oh my. Okay, first of all, you can die of pretty much anything, and pretty much anything has some dangers. Or at least that’s what those fear-mongering scientists claim. The studies show you some numbers to guide you in how much danger X (in this case red meat) poses to specific individuals. Do you have any specific reason to think that those studies are fabricated and that in fact red meat has none of the effects they claim?
Furthermore, if I tell you that drinking a large amount of water can kill you and do a study to prove it then am I a fear mongering scientist?
This is a pretty solid argument.
Thanks for clarifying. I disagree. See the systematic bias/complexity arguments.
Do you really choose your music based on the average opinion of “experts”? Give me a break. Look, if you could randomly draft 20 people who had demonstrated independent rationality and objectivity, assign them to a problem, and take the majority opinion, I would be fine with that. But that’s not what we have at all. Anyone with an IQ above 110 can get any degree they want.
Why would the best research win out? Why not the most fashionable research that confirms everyone’s worldviews? Why not the research with the punchier abstract title? Why not the research that was fudged to show more impressive results?
They could probably find a correlation between eating red meat and watching action movies, but that’s not exactly publishable.
I mean, sure, if you consumed more red meat than was physiologically possible to scarf down without choking, you’d die. But that’s not unique to red meat. They’re claiming that there is a unique property of red meat which causes all these health problems, so no, it doesn’t fall under the same category as “pretty much anything can kill you”.
And no, they technically don’t even show danger. All they do is show correlations. Would you also conclude that wearing XXL t-shirts makes you fat?
Confounding variables mentioned above. Lack of replication/opposite findings in controlled studies. Testimonies from thousands of people on the paleo diet who have reversed their blood chemistry. Fat doctors/nutritionists, etc.
If you try to publish dozens of studies on it in the year 2012, yes you are.
“Hey guys I just did ANOTHER study showing that drinking 82 gallons of water in one sitting will kill you (p<0.05)”
That would be fear mongering, although people probably wouldn’t take it seriously.
Yes, except that I am the only expert on what music I like.
Are we talking about degrees here? I am pretty sure I’ve been talking about top-level articles. Or can anyone with an IQ above 110 publish one of those?
No winning out here. The research will be closer to the truth than a random answer because the accuracy of the theories gets compared to reality, by doing experiments for example. Or because not every single person is completely biased and blind to the results they get.
Hey, that’s why they are correlations. I am not stopping you from believing that being predisposed to diabetes and cancer or whatever makes you more likely to eat red meat for example.
As I said in the other thread, I am not participating in this conversation any more.
Oh, so you agree there can be good reasons to discount the “expert” establishment, no matter how much “peer review” or how many citations they have.
Yes. But getting a degree is normally a prereq for publishing, and everyone who gets a degree publishes something. And yes, you can publish articles in the “top” journals while in grad school.
Not every single person has to be biased. Just enough of them.
But the researchers conclude that red meat increases your risk of heart disease simply because it is associated with heart disease. That is dishonest. If they can get away with blatantly unsubstantiated statements like that in epidemiological papers, what can’t they get away with buried in their SAS databases and algorithms?
If there is significant interest in this post I will write a second one illustrating how to conduct a search using those heuristics from start to finish with examples and maybe screenshots.
I’d be interested in reading an example post like this, especially if it included a section on how best to determine relevant search keywords for a topic you’re not particularly familiar with. This is something I find I have a fair amount of trouble with.
FWIW, in a few of my comments linked in http://lesswrong.com/r/discussion/lw/h3w/open_thread_april_115_2013/8p3q , I do unpack some of the steps I took to find what I did. Not really a full think-aloud protocol and I only did it in a few because it’s a real hassle to write it all down as I went (you can’t reconstruct it for anything but the simplest searches), but may be helpful nonetheless.
Every little bit helps, thanks.
I agree! So I created a thread to collect all that.
A problem with this heuristic is that one may also cite a paper in order to debunk it, call it ridiculous, etc. Bem’s psi paper has been cited 154 times, mostly by papers with titles such as “Fearing the Future of Empirical Psychology: Bem’s (2011) Evidence of Psi as a Case Study of Deficiencies in Modal Research Practice”. So do also take a look at what those citing papers actually say.
Also, some papers may seem to have high citation counts, until you realize that it all comes from the author citing himself in later papers, or from random websites that Google Scholar counts as cites even though they are just pages put up by somebody who might be just as ignorant about the topic as you are.
Yes, heuristics generally save you time in a lot of cases by sacrificing utility/accuracy in a small number of cases.
The biggest problem with scholarly searches is paywalls. :(
If you’re only going to read the abstract anyway, I don’t think that’s a problem; the abstracts are usually in front of the paywall, not behind it.
This is one of the reasons why I advocate reading the abstracts, but I must be slightly alienated, because it did not occur to me that people don’t know that abstracts are pretty much always free, so I merely hinted at that fact. I will edit it in.
sci-hub.org
Problem solved. Go now, and be at peace.
No matter what I enter, it just goes to http://www.kremlin.ru/
Am I doing something wrong?
PS. You need to enter the URL of the article you want to read.
Update for anyone reading this today: /r/scholar has informed me that US IP addresses are rejected. You’ll have to change to a non-US IP if you want it to work.
I dunno, just checked, it worked for me.
I recall a site linked to recently that has many paywalled papers liberated, does anyone recall what it was called?
Feh. Dilettantes read the abstracts. Professionals read the Methods section.
Ok, but I am not a professional in the vast majority of fields I want to find studies in. I would go so far as to say I’m a dilettante in many of them.
Even as a dilettante you can often dismiss the conclusions of a paper based on really obvious problems in the methodology (especially in nutrition/exercise/longevity research).
You often don’t have access to the full paper.
Speed-accuracy trade-off.
Interesting. I’d love to see an example of how you’d use Google Scholar practically. Could you pick a test question and walk us through your steps?