The best description of the scientific method I have ever seen is from a conceptual physics textbook by Hobson. Paraphrasing:
The scientific method involves the dynamic interplay between theory and experiment.
That’s it. Perfect. As a scientist, I don’t come to work on Monday and make an observation, then form a hypothesis on Tuesday, devise an experiment to test some prediction on Wednesday, perform the experiment Thursday, and interpret the result on Friday. On any given day I would be hard pressed to tell you where I am in the process. All of the above, really. It’s a mess. It’s a constant back and forth comparing theoretical expectations to the final arbiter of any dispute: nature. Some people specialize in one aspect of the process, and can spend years chewing on some piece of it. But it is seldom done in isolation.
Meanwhile, science fair projects across the nation—under the advisement of teachers who themselves often have no personal experience of how science really works—approach their subject in an uncharacteristically formulaic way. Nine times out of ten the effort culminates in a proof that the initial hypothesis was right, as if that were the goal and the criterion for success. The rare student is surprised by the data, admits the failure of the hypothesis, quickly reconsiders initial assumptions, and heads off in an unexpected yet rewarding direction (dynamic interplay). That’s the real scientist at work. Too bad the judges (in my experience as a judge) often don’t recognize this apparent failure as the true success.
I can’t pass up the opportunity to share with you the “best” high school science fair project I ever saw (when I was myself a student participant in the fair—and no, it was not my project): “Does light travel through the dark?” Setup: light-tight cardboard box painted black on the inside; flashlight shining through a hole in one end; a peephole in the other end to see if the light made it. Any guesses?
-- Tom Murphy
What would be a better way to teach young children about the nuances of the scientific method? This isn’t meant as a snarky reply. I’m reasonably confident that Tom Murphy is onto something here, and I doubt most elementary school science fairs are optimized for conveying scientific principles with as much nuance as possible.
But it’s not clear to me what sort of process would be much better. Even on reading the full post, the closest he comes to addressing this point is “don’t interpret failure to prove the hypothesis as failure of the project.” Good advice, to be sure, but it doesn’t really get at the “dynamic interplay” he characterizes as so important. Maybe require that experiments occur in multiple rounds, and judge participants in large part on how they incorporate results from earlier rounds into later ones? That would probably be better, although I imagine you’d quickly run up against basic time and energy constraints: how many elementary schools would be willing and able to keep students on year-long science projects?
That’s not to say we shouldn’t explore options here, but it might be that, especially for young children, traditional one-off science fairs do a decent enough job of teaching the very basic idea that beliefs are tested by experiment. Maybe that’s not so bad, akin to why Mythbusters is a net positive for science.
Well, doing experiments to test which of several plausible hypotheses is more accurate, rather than those where you can easily guess what’s going to happen beforehand, would be a start. (Testing whether light can travel through the dark? Seriously, WTF?)
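To make “which of several plausible hypotheses is more accurate” concrete, here is a minimal Bayesian sketch in Python. The coin example, the two candidate hypotheses, the equal priors, and the function name are all my own illustrative choices, not anything from the thread:

```python
def posteriors(heads, flips):
    """Posterior probability of each hypothesis after seeing the data.

    The two hypotheses and the equal priors are illustrative choices:
    neither one is obvious in advance, so the data actually decides.
    """
    hyps = {"fair coin": 0.5, "biased coin": 0.7}  # hypothesized P(heads)
    # Binomial likelihood of the observed data under each hypothesis
    # (the binomial coefficient cancels in the normalization).
    likelihoods = {
        name: p ** heads * (1 - p) ** (flips - heads)
        for name, p in hyps.items()
    }
    total = sum(likelihoods.values())
    return {name: like / total for name, like in likelihoods.items()}

print(posteriors(14, 20))  # 14 heads in 20 flips favors the biased coin
```

With 14 heads in 20 flips the posterior on the biased coin comes out to roughly 0.84, while 10 heads in 20 swings about as strongly toward the fair coin — exactly the kind of experiment where you can’t easily guess the verdict beforehand.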
That is a large part of the reason why we have problems like the file drawer effect and data dredging.
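A quick simulation illustrates the data-dredging half of that. The function names and parameters below are my own, and the |t| > 2 cutoff is a rough stand-in for p < 0.05; the point is only that testing enough hypotheses on pure noise guarantees some “significant” results:

```python
import random

random.seed(0)  # reproducible noise

def t_stat(a, b):
    """Welch t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)

def dredge(n_hypotheses=20, n=30):
    """Test n_hypotheses comparisons where the null is true by construction
    (both groups are the same Gaussian noise) and count 'significant' hits."""
    hits = 0
    for _ in range(n_hypotheses):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        if abs(t_stat(a, b)) > 2.0:  # roughly p < 0.05 at this sample size
            hits += 1
    return hits
```

Averaged over many runs, `dredge()` turns up about one spurious hit per twenty tests even though every null hypothesis is true; report only the hits and discard the rest, and you have the file drawer effect in miniature.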
I don’t think that thinking categorically and mechanically would be feasible or productive. It’s a reality that we have to think messily in order to solve problems quickly, even if that efficiency also introduces biases. Still, we should at least be aware of what the proper way to do it would be.
Yeah. But I think there are different levels of propriety, and that is what the quote is getting at. We should mention that the ideal form of science would look very rigid and modular and be without bias. Then we should talk about how actual science inevitably involves biases and errors, and how those biases are sometimes compensated for, to a limited extent, by increased efficiency. Then we should talk about how to minimize biases while maximizing the efficiency of our thought processes.
Level One: Ideal
Level Two: Reality
Level Three: Pragmatic Ideal
A class or book on Level Three would be very useful to me, and I’m not aware of any. Anyone have suggestions? Less Wrong seems to cover Level One very well, and Level Two is obvious to anyone who is a human being, but Level Three is what I would really like to work on.