I’d modus tollens your modus ponens. Except for ideas with only one version, like Timecube, there are non-obvious truths which can be extracted. For instance, Mormons have contributed positively to the LW memepool. But the marginal cost of delving deep into real crackpottery probably isn’t worth the marginal benefit in truth.
This is all well and good, but imagine that, instead of living in a world where people generally don’t communicate optimally and tend to irrationally cling to their memes, we live in a world of rational discourse, where truths are allowed to bubble up naturally to the surface and manifest as similar conclusions drawn from disparate experiences.
In this hypothetical world you would benefit from arguing with a crackpot: you would supply xem with the evidence xe overlooked (from within xyr model it felt irrelevant, so xe didn’t pursue it; that’s how I imagine one could end up with crackpot beliefs in a rational world), and xyr non-obvious truth would come up as a reason for xyr weird world-view. In that situation the marginal benefit of engagement is high, because behind most crackpot theories there would be an extremely rare, and thus valuable, experience (a piece of evidence about the nature of your common world), and the marginal cost of engagement is diminished because your effort is spent on adjusting both your map and xyrs, not on defeating xyr cognitive defenses.
With me so far? It gets better. There’s no hard and fast boundary between our world and the one painted above. And there are different kinds of crackpots. I’m pretty sure there are many people with beliefs that you have good enough reasons to dismiss, yet which make total sense to somebody with their experiences. And many of them can be argued with. They may be genuinely interested in finding the truth, or winning at life, or hearing out contrarian opinions. They may not be shunned by society enough to have developed thick defenses. They may be smart and rational (as far as humans go, which is not very far).
So finding the right kind of crackpot becomes a lucrative problem: a source of valuable insights and debating practice.
Weakly related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/ and http://en.wikipedia.org/wiki/God_of_the_gaps