This, I think, is just one symptom of a more general problem with scientists: they don’t emphasize rigorous logic as much as they should. Science, after all, is not only about (a) observation but about (b) making logical inferences from observation. Scientists need to take (b) far more seriously (not that all don’t, but many do not). You’ve heard the old saying “Scientists make poor philosophers.” It’s true (or at least, true more often than it should be). That has to change. Scientists ought to be amongst the best philosophers in the world, precisely because they ought to be masters of logic.
The problem is that philosophers also make poor philosophers.
Less snarkily, “logical inference” is overrated. It does wonders in mathematics, but rarely does scientific data logically require a particular conclusion.
Well, of course one cannot logically and absolutely deduce much from raw data. But with some logically valid inferential tools in our hands (Occam’s razor, Bayes’ Theorem, Induction) we can probabilistically derive conclusions.
In what sense is Occam’s razor “logically valid”?
Well, it is not self-contradictory, for one thing. For another thing, every time a new postulate or assumption is added to a theory we are necessarily lowering the prior probability because that postulate/assumption always has some chance of being wrong.
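A toy calculation makes the second point concrete (the numbers here are made up purely for illustration): assign each assumption some probability of being right, multiply them in one at a time, and the joint prior can only shrink.

```python
# Hypothetical per-assumption probabilities of being correct -- illustrative
# numbers only, not from the discussion above.
assumption_probs = [0.9, 0.8, 0.95]

joint = 1.0
priors = []
for q in assumption_probs:
    joint *= q          # adding an assumption multiplies in a factor < 1
    priors.append(joint)

print(priors)  # each entry is strictly smaller than the one before it
```

Since every factor is below 1, the sequence of priors is strictly decreasing no matter what the individual probabilities are.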
Just to clarify something: I would expect most readers here would interpret “logically valid” to mean something very specific—essentially something is logically valid if it can’t possibly be wrong, under any interpretation of the words (except for words regarded as logical connectives). Self-consistency is a much weaker condition than validity.
Also, Occam’s razor is about more than just conjunction. Conjunction says that “XY” has a higher probability than “XYZ”; Occam’s razor says that (in the absence of other evidence), “XY” has a higher probability than “ABCDEFG”.
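For what it’s worth, the conjunction half of this claim is easy to check numerically. Here is a small sketch (my own construction, with X, Y, Z as placeholder binary variables as in the comment above) verifying that P(X and Y) ≥ P(X and Y and Z) under an arbitrary joint distribution:

```python
from itertools import product
import random

# Build a random joint distribution over three binary variables (X, Y, Z).
random.seed(0)
weights = [random.random() for _ in range(8)]
total = sum(weights)
joint = {bits: w / total for bits, w in zip(product([0, 1], repeat=3), weights)}

# P(X and Y) sums over both values of Z, so it can never be smaller
# than the single term P(X and Y and Z).
p_xy = sum(p for (x, y, z), p in joint.items() if x == 1 and y == 1)
p_xyz = joint[(1, 1, 1)]
print(p_xy, p_xyz)
```

The inequality holds for any distribution, since P(X and Y) is P(X and Y and Z) plus the non-negative term P(X and Y and not-Z). Occam’s razor, as noted above, makes the stronger and non-tautological claim comparing hypotheses that don’t nest inside one another.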
Hi Giles,
I think Occam’s razor is logically valid in the sense that, although it doesn’t always provide the correct answer, it is certain that it will probably provide the correct answer. Also, I’m not sure I understand your point about conjunction. I’ve always understood “do not multiply entities beyond necessity” to mean that, all else held equal, you ought to make the smallest number of conjectures/assumptions/hypotheses possible.
The problem is that the connotations of philosophy (in my mind at least) are more like how-many-angels mindwanking than like On the electrodynamics of moving bodies. (This is likely the effect of studying pre-20th-century philosophers for five years in high school.)
21st-century philosophers aren’t much different.
Saying that people should be better is not helpful. Like all people, scientists have limited time and need to choose how to allocate their efforts. Sometimes more observations can solve a problem, and sometimes more careful thinking is necessary. The appropriate allocation depends on the situation and the talents of the researcher in question.
That being said, there may be a dysfunctional bias in how funding is allocated—creating an “all-or-none” environment where the best strategy for maintaining a basic research program (paying for one’s own salary plus a couple of students) is to be the type of researcher who gets multi-million-dollar grants and uses that money to generate gargantuan new datasets, which can then provide the foundation for a sensational publication that everyone notices.