The “rules” of science, if they exist, are subject to change at any time.
Here’s a rule of science: your hypothesis must make testable predictions. It must be falsifiable. Is that “subject to change at any time”? I bet there are more such rules.
While it may not perfectly describe how actual scientists work all the time, the scientific method describes the process by which we sort good ideas and models from bad ones, which is the quintessential goal of science (the “advancement of science,” if you will).
Just to be clear on what we are discussing, here is the Oxford English Dictionary definition (I don’t like using dictionaries as authorities; I think it’s stupid. This is just to have a working definition on the table): “A method or procedure… consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.”
In order for the scientific community to take a claim seriously, certain expectations must be satisfied, such as a reproducible experiment, peer-reviewed publication, etc. When a hypothesis is proposed (assuming it has already met the baseline requirement of making testable predictions), it is thrust into the death pit of scientific inquiry, where scientists do everything they can to test and falsify it. While the subject matter may span vastly different areas of science, this process is still generally followed.
Scientists who do science for a living may have gotten good at this process, so good that they don’t belabor each element as you would in a middle school science class, but they follow it nevertheless. It is true that bad science happened in the past, and even today lapses in scientific integrity happen; however, the reason science is given the authority it has is its strict adherence to the above process. (Also, as a disclaimer, there are many nuances to this process that I glossed over; I just wanted to get the general idea across.)
If I may go out on a limb here, it sounds to me like the chaos you are talking about is the unavoidably arbitrary nature of observing phenomena and of proposing hypotheses. Oftentimes throughout history we have stumbled into entirely new areas of science by sheer accident. Likewise (unless they are building a phenomenological model), scientists have no better way to propose hypotheses than to guess at the answer based on the observations they currently have, and then make new observations and experiments to see if they were right.
So I definitely agree with you about the chaotic nature of our stumbling across new phenomena in our quest to understand reality, but to say that the process we go through to establish scientific knowledge is not systematic seems a bit extreme.
Interesting stuff. I am all for trying to improve people’s reasoning skills, and understanding how particular people think is a good place to start, but I’m a bit concerned about the way you talked about knowledge here (and where it comes from).
Frankly, I wouldn’t really look to any person as a source of knowledge in the way you seem to be implying here.
Here’s how knowledge and experts work: there’s a whole bunch of information out there, literally more than any one person could (or would care to) know, and we simply don’t have the time (or often the background) to fully understand certain fields and, more importantly, to evaluate which claims are true and which aren’t. Experts are people who do have the background in a given field; they usually know what research has been done in their field and can answer questions and make statements with legitimate authority when speaking on a subject in which they are well versed. Once you have a consensus among many such experts, you have raised the authority of that opinion further, because you’ve reduced the likelihood of one person misspeaking, making things up, being dishonest, etc. Also note that experts talking on subjects outside their field of study have no more authority than anyone else (though they are often well informed on other subjects); this is where the argument-from-authority fallacy comes from, e.g. “Einstein said that the sky is green” … so what?
I suspect you know all of this already (I don’t mean to come off as lecturing too much; I’m just reiterating some baseline stuff).
After all that rambling about experts, the important thing to take away is that knowledge (and by knowledge in this context I mean awareness of information that corresponds with reality, i.e. the truth) doesn’t come from the experts; experts are just the people who investigate the truth and report back to the rest of humanity what they’ve found. In other words, reality is objective, and claims should be evaluated based on their evidence, not on the person who proposes them.
All of the examples you’ve used deal with things that actually do have an objective answer, whether or not we have tested (or feasibly can test) them empirically. (Also, as a side note, that bit about the 50% chance of being true is ridiculous even if you have no prior knowledge going in; you would simply say “I don’t know whether these claims are true.”)
People definitely have biases, and we should be particularly cautious when dealing with any claims related to contentious issues. Further, I’d like to stress that just because a large majority of the experts in a field say something doesn’t make it true; but it does mean we should believe it until new information says otherwise, because frankly an expert consensus is one of the highest degrees of certainty we can come up with as a species.
I guess the main thing I’m trying to say that ties directly into your post is that we shouldn’t care how someone formed their beliefs when evaluating the veracity of a claim; we should care:
When we suspect that a bias may have led to a false reporting of real information (in which case we would want independent, unbiased research/reporting)
When we want to change someone’s mind about something
When we want to keep someone’s faulty and infectious belief structure (e.g. Dark Side Epistemology) from propagating to other people, by teaching them critical thinking/rationality and common mistakes like said structure
Still, figuring out how people think has always been an interesting area of science worth pursuing, and the tools and sample sizes have grown a lot since the days of case studies. I hope you find more interesting stuff to share.