I feel like there’s a third way people tend to act regarding theories and evidence. “I’m curious about this topic, so I’m going to gather observations about it, and seek out new ways to make observations”.
I feel like this is actually the attitude that should precede either of the other two when approaching a novel area of research.
An example that comes to mind for me is the invention of the microscope, and the revolution in biology that this led to.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4422127/#s1title
I would argue that the first microscopes were made because people were curious. And then observations were made and shared using these new tools. I think it’s important that knowledge gathering and sharing tends to precede that knowledge being considered evidence for or against a hypothesis.
So I’d describe these steps:
Unfocused curiosity-driven knowledge gathering and sharing
Tentative hypothesis formation
Focused knowledge gathering to explore the tentative hypotheses
After a critical mass of evidence has been gathered, and many tentative hypotheses resolved, a possible theory takes shape.
This is the step of asking “What theory best explains this evidence?”
Then you actively seek evidence which would disprove the theory. As you rule out possible ways the theory could be disproven, you gradually become more confident that the theory is correct.
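To make that "gradually become more confident" step concrete, here's a minimal Bayesian sketch (my own illustration, with made-up numbers, not anything from the discussion above): suppose each attempted disproof would always come up clean if the theory were true, but would only come up clean 30% of the time if it were false. Then each survived test pushes your confidence up.

```python
# Minimal sketch: confidence in a theory after it survives attempted disproofs.
# Assumed numbers: a test always passes if the theory is true (p_pass_if_true = 1.0)
# and passes only 30% of the time if the theory is false (p_pass_if_false = 0.3).

def update(prior, p_pass_if_true=1.0, p_pass_if_false=0.3):
    """Posterior probability the theory is true, given it survived one test (Bayes' rule)."""
    numerator = p_pass_if_true * prior
    denominator = numerator + p_pass_if_false * (1 - prior)
    return numerator / denominator

confidence = 0.5  # start undecided
for test in range(1, 6):
    confidence = update(confidence)
    print(f"after surviving test {test}: {confidence:.3f}")
# Confidence climbs toward 1 as the theory keeps surviving tests it could have failed.
```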
Then you teach others about the theory. This is where you explain how the evidence gathered so far fits the theory.
At this step someone might ask, “What’s the evidence for this theory?” in order to evaluate whether to accept your theory. That seems to me to fundamentally be a question asked by a theory-learner not a theory-creator. A wisely critical consumer of education asks this about a theory before accepting the theory as part of their worldview.
It seems “unfocused curiosity-driven knowledge gathering” makes sense if it is driven by some form of instrumental goal, as in invention and engineering, which produce useful stuff rather than general explanations of the world. At least if I imagine it as gathering data from trial-and-error tinkering, which is presumably how the microscope was invented.
But for science it is mostly the case that we already have far more data than we can make sense of. So the main problem is to explain known phenomena, and additional data gathering is only necessary to distinguish between competing explanations or to potentially rule out existing ones. There are exceptions, though: astronomy, history, and paleontology, for example, try to create big catalogues of data in some subject area.
That’s so interesting. Such a very different view of science from mine. I feel like there’s a lot of data sometimes, but then, getting down into the weeds on some particular narrow question, I suddenly find myself lacking the exact data I would need. Or a new method or tool opens up a new type of data...