Surely the truth about knowledge and justification isn’t correlated with which school you went to
It seems pretty likely that there is some correlation. (Suppose, without loss of generality, that some kind of epistemic externalism is true. Then some schools—the ones where externalism is popular—correlate with the production of true beliefs about epistemology.) The problem is just that we don’t know in advance which schools actually have it right.
Perhaps what you mean to suggest is that going to school X isn’t a sensitive method of belief formation. Even if what it taught was false (like school Y), you would still end up believing it (just as the students of Y do).
Then again, one could say much the same thing about being born into a religious cult (or a different historical epoch). I do consider myself lucky to have avoided such an epistemically stunted upbringing. But this kind of luck does not in any way undermine my present beliefs.
In Meta-Coherence vs. Humble Convictions, I suggest that what matters is how you assess the alternative epistemic position. If you really think it is just as well-informed and well-supported as your own, then this should undermine your present beliefs. Otherwise, it need not. To consider oneself the beneficiary of epistemic luck is not necessarily irresponsible. (Indeed, it’s arguably necessary to avoid radical skepticism!)
Perhaps it would have been more accurate to say that your choice of school does not causally interact with the truth of its pet theory.
Philosophical truths don’t seem like the kinds of things that would causally interact with anything. (They don’t have causal powers, do they?)
ETA: why is this being downvoted?
Right, they would inferentially interact with them. Causal map:
Universe’s structure (plus some other stuff) --> Philosophical Truth
From observing the universe, you should make (some) change to your estimates of philosophical truths, but the truths don’t cause the universe—just the reverse.
What Alicorn was saying, I think, is that there’s no “my choice of school” node that points to (i.e. is a cause of) Philosophical Truth. Rather, such a node would at best point to “my beliefs”.
(And ideally, you’d want the universe’s structure to be a cause of your school’s theories...)
ETA: Related: Argument screens off authority. Silas summary: truth causally flows to proxies for the truth, sometimes through multiple intermediate stages. E.g. the truth of a position causes good arguments for it, which cause experts to believe it, which cause good schools to teach it. But the closer you are to the truth in the causal chain, the more of the downstream proxies you have screened off and made irrelevant.
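To make the screening-off claim concrete, here is a minimal sketch (my illustration, not anything from the thread) of the chain Truth --> Argument --> ExpertBelief as a tiny Bayesian network; the node names and all probability numbers are invented. Once you condition on the argument, additionally learning the expert’s opinion leaves the posterior on the truth unchanged, which is the sense in which the argument “screens off” the authority.

```python
# Minimal sketch: "argument screens off authority" as conditional independence
# in a three-node causal chain  Truth -> Argument -> ExpertBelief.
# All probabilities are made-up illustrative numbers.
from itertools import product

p_T = {1: 0.5, 0: 0.5}                 # prior that the position is true
p_A_given_T = {1: {1: 0.9, 0: 0.1},    # P(good argument exists | truth value)
               0: {1: 0.2, 0: 0.8}}
p_E_given_A = {1: {1: 0.8, 0: 0.2},    # P(expert believes it | argument quality)
               0: {1: 0.3, 0: 0.7}}

def joint(t, a, e):
    # chain factorization P(T) * P(A|T) * P(E|A)
    return p_T[t] * p_A_given_T[t][a] * p_E_given_A[a][e]

def p_truth(**obs):
    # P(Truth = 1 | observed nodes), by brute-force enumeration of the joint
    num = den = 0.0
    for t, a, e in product((0, 1), repeat=3):
        vals = {'T': t, 'A': a, 'E': e}
        if any(vals[k] != v for k, v in obs.items()):
            continue
        den += joint(t, a, e)
        if t == 1:
            num += joint(t, a, e)
    return num / den

print(p_truth(E=1))        # expert belief alone is some evidence (~0.65)
print(p_truth(A=1))        # the argument alone is stronger evidence (~0.82)
print(p_truth(A=1, E=1))   # adding the expert changes nothing (~0.82):
                           # the argument has screened off the authority
```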
Again, how does the ‘my choice of school’ node here differ from the ‘my not being born into a cult’ node? The latter doesn’t cause philosophical truths either. (Strictly speaking nothing does: only contingent things have causes, and philosophical truths aren’t contingent on how things turn out. But let’s put that aside for now.) What it does is provide me with habits of thought that do a better job of producing true beliefs than the mental habits I would have acquired if born into a cult. But then different schools of philosophy teach different habits of thought too (that’s why they reach different conclusions). The flaws in the other schools of thought are much less obvious than the flaws found in cults, but that’s just a difference in degree...
Right, it doesn’t. But they’re still going to be inferentially connected (d-connected in Judea Pearl’s terminology) because both a) your beliefs (if formed through a reliable process), and b) philosophical truths, will be caused by the same source.
And just a terminology issue: I was being a bit sloppy here, I admit. “X causes Y”, in the sense I was using it, means “the state of X is a cause of the state of Y”. So it would be technically correct but confusing to say, “Eating unhealthy foods causes long life”, because it means “Whether you eat unhealthy foods is a causal factor in whether you have a long life”.
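A companion sketch (again purely illustrative, with invented node names and numbers) for the common-cause point: in the fork Truth <-- UniverseStructure --> Belief, neither endpoint causes the other, yet they are d-connected through the shared parent, so a reliably formed belief is still evidence about the philosophical truth; conditioning on the common source then renders them independent.

```python
# Minimal sketch: d-connection through a common cause.
# Fork structure  Truth <- UniverseStructure -> ReliablyFormedBelief
# (illustrative numbers; neither Truth nor Belief causes the other).
from itertools import product

p_S = {1: 0.5, 0: 0.5}                  # some feature of the universe's structure
p_T_given_S = {1: {1: 0.95, 0: 0.05},   # P(philosophical claim true | structure)
               0: {1: 0.10, 0: 0.90}}
p_B_given_S = {1: {1: 0.85, 0: 0.15},   # P(reliable believer accepts it | structure)
               0: {1: 0.20, 0: 0.80}}

def joint(s, t, b):
    # fork factorization P(S) * P(T|S) * P(B|S)
    return p_S[s] * p_T_given_S[s][t] * p_B_given_S[s][b]

def p_truth(**obs):
    # P(Truth = 1 | observed nodes), by enumerating the joint
    num = den = 0.0
    for s, t, b in product((0, 1), repeat=3):
        vals = {'S': s, 'T': t, 'B': b}
        if any(vals[k] != v for k, v in obs.items()):
            continue
        den += joint(s, t, b)
        if t == 1:
            num += joint(s, t, b)
    return num / den

print(p_truth())           # prior on the truth (~0.53)
print(p_truth(B=1))        # the belief raises it (~0.79): d-connected via the fork
print(p_truth(S=1))        # conditioning on the common source... (~0.95)
print(p_truth(B=1, S=1))   # ...makes the belief add nothing (~0.95)
```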
Yes, I assumed that’s how philosophers define the terms, but (a) I don’t find such a category useful, because of (b) all the instances where philosophers have had to revise their first-principles derivations in light of subtle assumptions about how the universe works.
I actually agree. Still, to the extent that they do converge on reliable truth-finding mechanisms, they should converge on the same truth-finding mechanisms. And one’s admission that one’s own truth-finding mechanism is so heavily school-dependent would indeed be quite worrisome, since it would indicate insufficient critical analysis of what one was taught.
Of course, merely being critical is insufficient (someone who said so in this discussion was rightfully modded down for such a simplistic solution). I would say that you additionally have to check that the things you learn are multiply and deeply connected to the rest of your model of the world, and not just some “dangling node”, immune to the onslaught of evidence from other fields.
This sounds like a metaphysics-epistemology confusion (or ‘territory-map confusion’, as folks around here might call it). It’s true that empirical information can cause us to revise our ‘a priori’ beliefs. (Most obviously, looking at reality can be a useful corrective for failures of imagination.) But it doesn’t follow that the propositions themselves are contingent.
Indeed, it’s easy to prove that there are necessary truths: just conditionalize out the contingencies until you reach bedrock. That is, take some contingent truth P, and some complete description C of the circumstances in which P would be true. Then the conditional “if C then P” is itself non-contingent.
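To spell the step out, here is a minimal possible-worlds rendering of the argument (my formalization, under the standard assumptions that necessity is truth in every possible world and that a “complete” description of the circumstances guarantees P in any world where it holds):

```latex
% Minimal possible-worlds rendering of the "conditionalize out the contingencies" step.
% Assumptions (mine): necessity = truth in every possible world; "complete description"
% is read as: every world satisfying C also satisfies P.
\documentclass{article}
\usepackage{amssymb}
\begin{document}
Let $P$ be a contingent truth and let $C$ be a complete description of the
circumstances under which $P$ holds. Read completeness as
\[
  \forall w \,\bigl( w \models C \;\Rightarrow\; w \models P \bigr).
\]
This is exactly the truth condition for the necessitated conditional
\[
  \Box\,(C \rightarrow P),
\]
so $C \rightarrow P$ holds in every world, i.e.\ it is necessary
(non-contingent), even though $P$ and $C$ are themselves contingent.
\end{document}
```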
Not sure how this engages with my challenge. The idea is that different schools might not all be converging on “reliable truth-finding mechanisms”. Maybe only one is, and the rest are like (non-obvious) cults, in respect of their (non-obvious) unreliability. [I’m not suggesting that this is actually the case, just that it’s a possibility we need to consider in order to tighten the arguments being presented here.] As the cult analogy shows, the contingency of our beliefs on our educational environment does not entail “insufficient critical analysis of what one was taught”. So I’m wanting you guys to fill in the missing premises.
Well, ideally, they’d interact on some level with the arguments in their favor.
The problem with this is that the alternative epistemic position also thinks that your position is not as well-informed and well-supported as theirs. Are they justified as well?
Yes, the key issue is not so much whether, on a first analysis, you came to think those other folks are not as well informed as you, but whether you would have thought that if you had been taught by them. The issue is how to overcome the numerous easy habits of assuming that what you were taught must have been better. Once you see that on a simple first analysis you would each think the other less informed, you must realize that the problem is harder than you had thought, and re-evaluate your reasons for so easily thinking they are wrong and you are right. Until you can find a style of analysis that would have convinced you, had you grown up among them, to convert to this side, it is hard to believe you’ve overcome this bias.
Robin Hanson just ended a post with the phrase “overcome bias.” This feels momentous, like theme music should be playing.
May I suggest the following?
http://www.youtube.com/watch?v=cSZ55X3X4pk
http://tvtropes.org/pmwiki/pmwiki.php/Main/TitleDrop
Well, that’s one issue. But I was addressing a different—more theoretical—issue, namely, whether acknowledging the contingency of one’s beliefs (i.e. that one would have believed differently if raised differently) necessarily undermines epistemic justification.
(Recall the distinction between third-personal ‘accounts’ of rational justification and first-personal ‘instruction manuals’.)
“Necessarily” is an extremely strong claim, making it overwhelmingly likely that such a claim is false. So why ever would that be an interesting issue? And to me, first-person instruction manuals seem obviously more important than third-person “accounts”.
I get the impression that many (even most) of the commenters here think that acknowledged contingency thereby undermines a belief. But if you agree with me that this is much too quick, then we face the interesting problem of specifying exactly when acknowledged contingency undermines justification.
I don’t know what you mean by “important”. I would agree that the instruction manual question is obviously of greater practical importance, e.g. for those whose interest in the theory of rationality is merely instrumental. But to come up with an account of epistemic justification seems of equal or greater theoretical importance, to philosophers and others who have an intrinsic interest in the topic.
It’s also worth noting that the theoretical task could help inform the practical one. For example, the post on ‘skepticism and default trust’ (linked in my original comment) argues that some self-acknowledged ‘epistemic luck’ is necessary to avoid radical skepticism. This suggests a practical conclusion: if you hope to acquire any knowledge at all, your instruction manual will need to avoid being too averse to this outcome.
The vast majority of claims people make in ordinary language are best interpreted as on-average-tendency or all-else-equal claims; it almost never makes sense to interpret them as logical necessities. Why should this particular case be any different?
Well, they might be just as internally consistent (in some weak, subjective sense). But if this kind of internal consistency or ratification suffices for justification, then there’s no “epistemic luck” involved after all. Both believers might know full well that their own views are self-endorsing.
I was instead thinking that self-ratifying principles were necessary for full justification. On top of that, it may just be a brute epistemic fact which of (say) occamism and anti-occamism is really justified. Then two people might have formally similar beliefs, and each sticks to their own guns in light of the other’s disagreement (which they view as a product of the other’s epistemic stunting), and yet only one of the two is actually right (justified) to do so. But that’s because only one of the two views was really justifiable in the first place: the actual disagreement may play no (or little) essential role, on this way of looking at things.
For further background, see my discussion of Personal Bias and Peer Disagreement.