I generally agree here, but I think it gives too little credit to genetic reasoning.
For example, I sometimes listen to Neal Boortz when driving, because the channel happens to be set when I start the car. One day he suddenly started going on and on about drilling for oil off the coast and in Alaska. This was at the exact time the McCain campaign and Republicans in general started a coordinated push on this issue, probably to play election politics with oil prices.
Anyway, Boortz has lots of reasonable arguments to support his claim that we should be drilling, and there is a pretty strong case to be made for it in general. He doesn't use underhanded arguments about oil prices, and he admits it will be ten years before the oil starts to flow—in other words, he is not being deceptive. However, he gives no fair analysis of the arguments against drilling (probably because he doesn't fully understand them).
What I'm saying is that, if your goal is to set up rules of thumb that help you find the truth despite your mental biases, you should discount any argument that seems to come as part of a sales pitch, even if it is well documented and researched, with supporting evidence. The rule is not easy to state succinctly, but it is basically: "Heavily discount any argument made by a group that stands to make money if it successfully persuades people." Notice that the rule makes no mention of the quality of the evidence! That is because no evidence can be trusted if the source is biased, even if that source has no dishonest intentions.
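The discount can be framed in Bayesian terms: what matters is not how strong the presented case looks, but how likely the source would be to present an equally strong case if the claim were false. Here is a minimal sketch of that idea; all the probabilities are invented purely for illustration.

```python
# Sketch: why a strong case from a motivated source barely moves you.
# All numbers below are made-up assumptions, not data.

def posterior(prior, p_case_if_true, p_case_if_false):
    """P(claim is true | a strong case was presented), by Bayes' rule."""
    num = p_case_if_true * prior
    return num / (num + p_case_if_false * (1 - prior))

prior = 0.5  # start undecided about the claim

# A motivated advocate searches only for supporting evidence, so they
# will assemble a strong-looking case almost regardless of the truth.
advocate = posterior(prior, p_case_if_true=0.95, p_case_if_false=0.80)

# A disinterested analyst mostly presents a strong case only when the
# evidence genuinely supports the claim.
neutral = posterior(prior, p_case_if_true=0.90, p_case_if_false=0.15)

print(round(advocate, 2))  # 0.54: barely moves the needle
print(round(neutral, 2))   # 0.86: a substantial update
```

The point of the sketch is that the discount lives entirely in `p_case_if_false`: a sales pitch has a high one, so even an impressive pile of evidence yields a likelihood ratio near 1.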
Hypothetical example: A scientist working for Pharma is testing the safety of a potential drug. The thing most likely to derail the drug is side effect X. The scientist and Pharma work very diligently and prove that side effect X is not associated with this drug. However, because the research was oriented toward proving the drug safe, rather than determining its safety, almost all the brain-hours went into questions like "how do we control this experiment to ensure that such-and-such is controlled for," and not into thinking about other safety issues. Perhaps the pills then cause some unforeseen side effect, while causing none of the ones that were considered at issue.
In that example, everyone acted honestly, but the research cannot be accepted with as much weight as independent testing, because there is unavoidable bias.