I recently commented on one of my friends’ Facebook posts regarding the Bill Nye/Ken Ham debate. One of the issues I brought up was that Ham’s Creationism lacked the qualities we would usually associate with good explanations, namely what I called “precision,” which I defined as:
Good explanations exclude more possible evidence than bad explanations. Let’s say you have two friends who collect marbles. One collects only black marbles, while the other collects every color of marble he can get his hands on. If your plumbing problems started after both friends were over for a few hours and a black marble was found in your pipes, it’s much more likely that the friend who collects only black marbles caused them than the friend who collects all colors, even though both friends are known to own black marbles.
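To make the intuition concrete, here’s a quick Bayes calculation (my own illustrative numbers, not anything from the thread: equal prior suspicion on each friend, and I’m assuming the all-colors friend owns ten colors and drops one uniformly at random):

```python
# Illustrative numbers (assumptions, not data): equal prior suspicion
# on each friend; the all-colors collector owns 10 colors and drops
# one uniformly at random.
prior = 0.5

p_black_given_only_black = 1.0  # the black-only collector can only drop black
p_black_given_mixed = 0.1       # 1 color out of an assumed 10

# Bayes' theorem: P(friend | black marble found in the pipes)
evidence = prior * p_black_given_only_black + prior * p_black_given_mixed
posterior_only_black = prior * p_black_given_only_black / evidence
posterior_mixed = prior * p_black_given_mixed / evidence

print(f"black-only friend: {posterior_only_black:.3f}")  # 0.909
print(f"all-colors friend: {posterior_mixed:.3f}")       # 0.091
```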
I’m pretty sure I made up this definition of “precision,” because upon Googling I can’t find any definition of “precision” that matches it. More importantly, I can’t really find any sort of list enumerating the qualities that separate good explanations from bad ones. The person I posted this in response to rightly pointed that out, so from his point of view “good explanation” seems entirely subjective. Any ideas on how to close the inferential gap between us using a more authoritative source than just my say-so?
My guess is the person most likely to defend this criterion is a Popperian of some flavor, since precise explanations (as you define them) can be cleanly falsified.
While it’s nice when something is cleanly falsified, it’s not clear we should actively strive for precision in our explanations. An explanation that says all observations are equally likely is hard to disprove, and hence hard to gather evidence for by conservation of evidence, but that doesn’t mean we should give it an extra penalty.
If all explanations have equal prior probability, then Bayesian reasoning will tend to favor the most precise explanation consistent with the evidence. Seeing a black marble is most likely when all the marbles in a collection are black. If you then found a red marble, that would definitely rule out the all-black collection (assuming both marbles had to come from the same one). The best remaining candidate would be a collection that is half black and half red. Ultimately, though, this all comes back down to likelihoods, so I’m not sure the idea of precision adds much.
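To spell that out with made-up numbers, here’s the likelihood of seeing a black marble and then a red one under three candidate collections (assuming independent, uniform draws; the ten-color figure is just an assumption):

```python
# Three hypothetical collections (made-up numbers); each entry gives
# the probability of drawing that color on a single uniform draw.
hypotheses = {
    "all black":            {"black": 1.0, "red": 0.0},
    "half black, half red": {"black": 0.5, "red": 0.5},
    "ten colors, uniform":  {"black": 0.1, "red": 0.1},
}

observed = ["black", "red"]

for name, color_probs in hypotheses.items():
    likelihood = 1.0
    for marble in observed:
        likelihood *= color_probs[marble]
    print(f"{name}: {likelihood:.3g}")

# all black:            0     -- ruled out by the red marble
# half black, half red: 0.25  -- the sharpest hypothesis still standing
# ten colors, uniform:  0.01
```

With equal priors, the half-and-half collection ends up 25 times more probable than the ten-color one, which is all “precision” is doing here.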
I agree it’s very Popperian, but I purposefully shied away from mentioning anything “science”-related, since that seemed to be a source of conflict; this person specifically thinks that science is just something people in lab coats do, part of a large materialist conspiracy to reject morality. But even when I left the “science-y” words out of it and relied only on the axioms of probability theory, he rejoined with something along the lines of “real life isn’t a probability game.” I kinda just threw up my hands at that point, telling myself that the inferential distance was too large to cross.
An explanation that says all observations are equally likely is hard to disprove, and hence hard to gather evidence for by conservation of evidence, but that doesn’t mean we should give it an extra penalty.
You shouldn’t give it an extra penalty. He’s just stating the usual likelihood penalty in an unusual way. The penalty due to the fact that the friend who owns all colors of marbles is less likely to drop a black one is equivalent to a penalty due to the fact that he has more possible colors he can drop.
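One way to see the equivalence (under an assumed uniform-drop model): if the all-colors friend owns n colors and drops one at random, then “his chance of dropping black is 1/n” and “he has n possible colors he could drop” are the same penalty phrased two ways; in log terms it’s log₂(n) bits against him.

```python
import math

# Assumed model: the all-colors friend owns n colors and drops one
# uniformly at random. The likelihood penalty (1/n) and the counting
# penalty (n possibilities) are the same number viewed two ways;
# in log terms, log2(n) bits of evidence against him per black marble.
for n in [1, 2, 10, 100]:
    likelihood = 1 / n
    penalty_bits = math.log2(n)
    print(f"{n} colors: P(black) = {likelihood:.3g}, penalty = {penalty_bits:.2f} bits")
```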
An explanation that says all observations are equally likely is hard to disprove, and hence hard to gather evidence for by conservation of evidence, but that doesn’t mean we should give it an extra penalty.
A straw Popperian could say that the hypothesis “flipping the coin provides random results” is unscientific, because it allows any results, and thus it cannot be falsified.
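On a Bayesian reading, though, “random results” isn’t empty: it assigns probability (1/2)ⁿ to every length-n sequence, so it simply bleeds likelihood ratio against any sharper hypothesis that fits. A toy comparison (my own example, modeling “random” as a fair coin):

```python
# Toy comparison (assumed example): "random results" modeled as a fair
# coin versus the sharper hypothesis "the coin always lands heads".
n = 10  # suppose we observe 10 heads in a row

p_data_given_random = 0.5 ** n   # (1/2)^10 -- the same for ANY sequence
p_data_given_always_heads = 1.0  # all-heads is the only sequence it allows

# Likelihood ratio favoring "always heads" after 10 straight heads:
print(p_data_given_always_heads / p_data_given_random)  # 1024.0
```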
Reminds me of ‘discrimination’. See Yudkowsky. (I’d link to WP, but it seems there’s no article there on the term.)
Have you read A Technical Explanation of Technical Explanation?