There is an entire chapter in Pearl’s Causality book devoted to the rabbit-hole of defining what ‘actual cause’ means. (Note: the definition given there doesn’t work, and there is a substantial literature discussing why and proposing fixes).
The counterargument to your post is that some seemingly fuzzy concepts actually have perfect intuitive consensus (e.g. almost everyone will classify any example as either concept X or not concept X the same way). This seems to be the case with ‘actual cause.’ As long as intuitive consensus continues to hold, the argument goes, there is hope of a concise logical description of it.
> As long as intuitive consensus continues to hold, the argument goes, there is hope of a concise logical description of it.
Maybe the concept of “infinity” is a sort of success story. People said all sorts of confused and incompatible things about infinity for millennia. Then finally Cantor found a way to work with it sensibly. His approach proved to be robust enough to survive essentially unchanged even after the abandonment of naive set theory.
But even that isn’t an example of philosophers solving a problem with conceptual analysis in the sense of the OP.
> some seemingly fuzzy concepts actually have perfect intuitive consensus (e.g. almost everyone will classify any example as either concept X or not concept X the same way)
Well, as I said, ‘actual cause’ appears to be one example. The literature is full of little causal stories where most people agree that something is an actual cause of something else in the story—or not. Other concepts that have already been formalized include ones used both colloquially in everyday conversation and precisely in physics (e.g. weight/mass).
One could argue that ‘actual cause’ is in some sense not a natural concept, but formalizing the algorithm humans use to decide ‘actual cause’ questions could still be useful for automating certain kinds of legal reasoning.
The Cyc project is a (probably doomed) example of a rabbit-hole project to construct an ontology of common sense. Lenat has been in that rabbit-hole for 27 years now.
Well, of course Bayesianism is your friend here. Probability theory elegantly supersedes the qualitative concepts of “knowledge”, “belief” and “justification” and, together with an understanding of heuristics and biases, nicely dissolves Gettier problems, so that we can safely call “knowledge” any assignment of high probability to a proposition that turns out to be true.
For example, take the original Gettier scenario. Since Jones has 10 coins in his pocket, P(man with 10 coins gets job) is bounded from below by P(Jones gets job). Hence any information that raises P(Jones gets job) necessarily raises P(man with 10 coins gets job) to something at least as high, regardless of whether (Jones gets job) turns out to be true.
The psychological difficulty here is the counterintuitiveness of the rule P(A or B) >= P(A), and is in a sense “dual” to the conjunction fallacy. Just as one has to remember to subtract probability as burdensome details are introduced, one also has to remember to add probability as the reference class is broadened. When Smith learns the information suggesting Jones is the favored candidate, it may not feel like he is learning information about the set of all people with 10 coins in their pocket, but he is.
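The arithmetic behind this can be checked directly. In the sketch below the numbers are invented for illustration; the only structural fact taken from the scenario is that “Jones gets the job” entails “a man with 10 coins in his pocket gets the job,” so the latter’s probability is bounded below by the former:

```python
# Toy model of the Gettier job scenario. The probabilities are made up;
# the point is only the entailment structure.
candidates = {
    "Jones": {"p_gets_job": 0.8, "has_10_coins": True},
    "Smith": {"p_gets_job": 0.2, "has_10_coins": True},  # unbeknownst to Smith
}

# Exactly one candidate gets the job, so these events are disjoint.
p_jones_gets_job = candidates["Jones"]["p_gets_job"]
p_ten_coin_man_gets_job = sum(
    c["p_gets_job"] for c in candidates.values() if c["has_10_coins"]
)

# "Jones gets the job" entails "a 10-coin man gets the job", so
# P(10-coin man gets job) >= P(Jones gets job): the broader
# reference class can only gain probability.
assert p_ten_coin_man_gets_job >= p_jones_gets_job
print(p_jones_gets_job, p_ten_coin_man_gets_job)  # 0.8 1.0
```

Evidence favoring Jones raises the lower bound on the broader event whether or not Jones actually gets the job, which is exactly the move that feels unintuitive in the story.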
In your example of the book by Mr. X, we can observe that, because Mr. X was constitutionally compelled to write truthfully about his mother’s socks, your belief about that is legitimately entangled with reality, even if your other beliefs aren’t.
> Well, of course Bayesianism is your friend here. Probability theory elegantly supersedes the qualitative concepts of “knowledge”, “belief” and “justification” and, together with an understanding of heuristics and biases, nicely dissolves Gettier problems, so that we can safely call “knowledge” any assignment of high probability to a proposition that turns out to be true.
I agree that, with regard to my own knowledge, I should just determine the probability that I assign to a proposition P. Once I conclude that P has a high probability of being true, why should I care whether, in addition, I “know” P in some sense?
Nonetheless, if I had to develop a coherent concept of “knowledge”, I don’t think that I’d go with “‘knowledge’ [is] any assignment of high probability to a proposition that turns out to be true.” The crucial question is, who is assigning the probability? If it’s my assignment, then, as I said, I agree that, for me, the question about knowledge dissolves. (More generally, the question dissolves if the assignment was made according to my prior and my cognitive strategies.)
But Gettier problems are usually about some third person’s knowledge. When do you say that they know something? Suppose that, by your lights, they have a hopelessly screwed-up prior — say, an anti-Laplacian prior. So, they assign high probability to all sorts of stupid things for no good reason. Nonetheless, they have enough beliefs so that there are some things to which they assign high probability that turn out to be true. Would you really want to say that they “know” those things that just happen to be true?
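The worry can be put in concrete terms. Under the proposed definition (high credence in a proposition that turns out to be true), an agent whose credences were assigned for no good reason still counts as knowing whatever it luckily gets right. A minimal sketch, with invented propositions and credences:

```python
# Truth-values of some propositions (the world as it actually is).
truths = {"A": True, "B": False, "C": True}

# An agent's credences, assigned for no good reason whatsoever.
credences = {"A": 0.95, "B": 0.95, "C": 0.05}

# The "high credence + turns out true" definition of knowledge:
knowledge = [p for p in truths if credences[p] > 0.9 and truths[p]]

# The agent is confidently wrong about B and dismisses C, yet the
# definition says it "knows" A, which it got right by sheer luck.
print(knowledge)  # ['A']
```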
That is essentially what was going on in my example with Mr. X’s book. There, I’m the third person. I have the stupid prior that says that everything in B is true and everything not in B is false. Now, you know that Mr. X is constitutionally compelled to write truthfully about his mother’s socks. So you know that reading B will legitimately entangle my beliefs with reality on that one solitary subject. But I don’t know that fact about Mr. X. I just believe everything in B. You know that my cognitive strategy will give me reliable knowledge on this one subject. But, intuitively, my epistemic state seems so screwed up that you shouldn’t say that I know anything, even though I got this one thing right.
ETA: Gah. This is what I meant by “down the rabbit-hole”. These kinds of conversations are just too fun :). I look forward to your reply, but it will be at least a day before I reply in turn.
ETA: Okay, just one more thing. I just wanted to say that I agree with your approach to the original Gettier problem with the coins.
> I have the stupid prior that says that everything in B is true and everything not in B is false. Now, you know that Mr. X is constitutionally compelled to write truthfully about his mother’s socks. So you know that reading B will legitimately entangle my beliefs with reality on that one solitary subject. But I don’t know that fact about Mr. X. I just believe everything in B. You know that my cognitive strategy will give me reliable knowledge on this one subject.
If you want to set your standard for knowledge this high, I would argue that you’re claiming nothing counts as knowledge since no one has any way to tell how good their priors are independently of their priors.
> If you want to set your standard for knowledge this high …
I’m not sure what you mean by a “standard for knowledge”. What standard for knowledge do you think that I have proposed?
> I would argue that you’re claiming nothing counts as knowledge since no one has any way to tell how good their priors are independently of their priors.
You’re talking about someone trying to determine whether their own beliefs count as knowledge. I already said that the question of “knowledge” dissolves in that case. All that they should care about are the probabilities that they assign to propositions. (I’m not sure whether you agree with me there or not.)
But you certainly can evaluate someone else’s prior. I was trying to explain why “knowledge” becomes problematic in that situation. Do you disagree?
I think that while what you define carves out a nice lump of thingspace, it fails to capture the intuitive meaning of the word “knowledge”. If I guess randomly that it will rain tomorrow and turn out to be right, then it doesn’t fit intuition at all to say I knew that it would rain. This is why the traditional definition is “justified true belief” and that is what Gettier subverts.
You presumably already know all this. The point is that Tyrrell McAllister is trying (to avoid trying) to give a concise summary of the common usage of the word knowledge, rather than to give a definition that is actually useful for doing probability or solving problems.
Yes. An excellent illustration of ‘the Gettier rabbit-hole.’
Thanks for the Causality heads-up.
Can you name an example or two?
Now, if only someone would give me a hand out of this rabbit-hole before I spend all morning in here ;).
Here, let me introduce you to my friend Taboo...
;)