What if the disagreeing parties have radical epistemological differences? Double crux seems like a good strategy for resolving disagreements between parties that have an epistemological system in common (and access to the same relevant data), because getting to the core of the matter should expose that one or both of them is making a mistake. However, between two or more parties that use entirely different epistemological systems—e.g. rationalism and empiricism, or skepticism and “faith”—double crux should, if used correctly, eventually lead all disagreements back to epistemology, at which point… what, exactly? Use double crux again? What if the parties don’t have a meta-epistemological system in common, or indeed, any nth-order epistemological system in common? Double crux sounds really useful, and this is a great post, but a system for resolving epistemological disputes would be extremely helpful as well (especially for those of us who regularly converse with “faith”-ists about philosophy).
Is there good reason to believe that any method exists that will reliably resolve epistemological disputes between parties with very different underlying assumptions?
Alcohol helps :-P
Not if they’re sufficiently different. Even within Bayesian probability we (technically) have an example: the hypothetical lemming race with a strong Gambler’s Fallacy prior. (“Lemming” because you’d never meet a species like that unless someone had played games with them.)
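The lemming example can be made concrete with a hypothetical sketch: two internally consistent agents watch the same coin flips, but one models the flips as i.i.d. while the other holds a Gambler’s-Fallacy-style anti-streak prior. More shared data never brings their predictions together, because they disagree about the data-generating process itself. (Both function names and the `swing` parameter are illustrative, not from any source.)

```python
def iid_predict(flips):
    # Beta(1,1)-Bernoulli model: P(next = H) = (heads + 1) / (n + 2).
    heads = flips.count("H")
    return (heads + 1) / (len(flips) + 2)

def gamblers_fallacy_predict(flips, swing=0.9):
    # Anti-streak prior: the agent is confident the process "corrects"
    # itself, so it expects a reversal of the most recent outcome.
    if not flips:
        return 0.5
    return 1 - swing if flips[-1] == "H" else swing

data = list("HHHHHHHH")  # a long run of heads, observed by both agents
print(iid_predict(data))               # near 0.9: expects the streak to continue
print(gamblers_fallacy_predict(data))  # near 0.1: expects the streak to break
```

Extending the run of heads only widens the gap: the i.i.d. agent grows more confident in heads, the anti-streak agent more confident in tails, and each sees the other as updating in the wrong direction.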
On the other hand, if an epistemological dispute actually stems from factual disagreements, we might approach the problem by looking for the actual reasons people adopted their different beliefs before they had an explicit epistemology. Discussing the trust a religious believer placed in their parents may not be productive, but at least progress seems mathematically possible.
Not particularly, no. In fact, there probably is no such method—either the parties must agree to disagree (which they could honestly do if they’re not all Bayesians), or they must persuade each other using rhetoric as opposed to honest, rational inquiry. I find this unfortunate.
Could you elaborate on that? Sorry, I just don’t get it.
It’s a hint at Aumann’s agreement theorem.
Oh, I wasn’t aware that they had to be Bayesian for that rule to apply, thanks for the help.
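The theorem has a constructive counterpart that makes the Bayesian requirement vivid: if two agents share a common prior and take turns publicly announcing their posteriors, the Geanakoplos–Polemarchakis announcement dynamics force those posteriors to converge. Below is a minimal sketch under a uniform common prior over a finite state space; the function names and the example partitions are mine, chosen for illustration.

```python
from fractions import Fraction

def cell(partition, w):
    # The agent's information at state w: the partition cell containing w.
    return next(c for c in partition if w in c)

def posterior(event, info):
    # Uniform common prior: P(event | info) = |event ∩ info| / |info|.
    return Fraction(len(event & info), len(info))

def refine(partition, other, event):
    # Condition on the other agent's announcement: split each cell by
    # the posterior the other agent would have announced in each state.
    new = []
    for c in partition:
        groups = {}
        for w in c:
            v = posterior(event, cell(other, w))
            groups.setdefault(v, set()).add(w)
        new.extend(groups.values())
    return new

def agree(event, part_a, part_b, w, rounds=20):
    # Alternate public announcements until the two posteriors match.
    for _ in range(rounds):
        part_b = refine(part_b, part_a, event)   # B hears A's announcement
        pb = posterior(event, cell(part_b, w))
        part_a = refine(part_a, part_b, event)   # A hears B's announcement
        pa = posterior(event, cell(part_a, w))
        if pa == pb:
            return pa
    return None

# Illustrative setup: states {1,2,3,4}, event {1,4}, true state 1.
# A initially assigns the event 1/2, B assigns it 1/3; after exchanging
# announcements, both settle on the same posterior.
print(agree({1, 4}, [{1, 2}, {3, 4}], [{1, 2, 3}, {4}], 1))
```

The common prior is what does the work here: drop it (as in the Gambler’s Fallacy lemmings above) and the announcements carry no force, which is why non-Bayesians can honestly agree to disagree.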