An interesting phenomenon I’ve noticed recently is that sometimes words do have short exact definitions that exactly coincide with common usage and intuition. For example, after Gettier scenarios ruined the definition of knowledge as “Justified true belief”, philosophers found a new definition:
“A belief in X is knowledge if one would always have that belief whenever X, and never have it whenever not-X”.
(where “always” and “never” are defined to be some appropriate significance level)
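One way to cash out that parenthetical, offered here as a sketch in my own notation rather than anything stated in the thread, is to pick a tolerance $\varepsilon$ and read "always" and "never" as conditional-probability constraints on the believer:

$$P(\text{one believes } X \mid X) \;\ge\; 1 - \varepsilon \qquad\text{and}\qquad P(\text{one believes } X \mid \neg X) \;\le\; \varepsilon$$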
Now it seems to me that this definition completely nails it. There’s not one scenario I can find where this definition doesn’t return the correct answer. (EDIT: Wrong! See great-grandchild by Tyrrell McAllister) I now feel very silly for saying things like “‘Knowledge’ is a fuzzy concept, hard to carve out of thingspace, there’s always going to be some scenario that breaks your definition.” It turns out that it had a nice definition all along.
It seems like there is a reason why words tend to have short definitions: the brain can only run short algorithms to determine whether an instance falls into the category or not. All you’ve got to do to write the definition is to find this algorithm.
Yep. Another case in point of the danger of replying, “Tell me how you define X, and I’ll tell you the answer” is Parfit in Reasons and Persons concluding that whether or not an atom-by-atom duplicate constructed from you is “you” depends on how you define “you”. Actually it turns out that there is a definite answer and the answer is knowably yes, because everything Parfit reasoned about “indexical identity” is sheer physical nonsense in a world built on configurations and amplitudes instead of Newtonian billiard balls.
PS: Very Tarskian and Bayesian of them, but are you sure they didn’t say, “A belief in X is knowledge if one would never have it whenever not-X”?
I’m thinking of Robert Nozick’s definition. He states his definition thus:
(1) P is true
(2) S believes that P
(3) If it were the case that (not-P), S would not believe that P
(4) If it were the case that P, S would believe that P
(I failed to remember condition 1, since 2 & 3 ⇒ 1 anyway)
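For reference, the same four conditions can be compressed into the standard notation for subjunctive conditionals, writing $\Box\!\rightarrow$ for “if it were the case that … then …” and $B_S$ for “S believes that”. This is only a restatement of the list above, not anything extra from Nozick:

$$K_S(P) \iff P \,\wedge\, B_S(P) \,\wedge\, \big(\neg P \;\Box\!\rightarrow\; \neg B_S(P)\big) \,\wedge\, \big(P \;\Box\!\rightarrow\; B_S(P)\big)$$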
There is a reason why the Gettier rabbit-hole is so dangerous. You can always cook up an improbable counterexample to any definition.
For example, here is a counterexample to Nozick’s definition as you present it. Suppose that I have irrationally decided to believe everything written in a certain book B and to believe nothing not written in B. Unfortunately for me, the book’s author, a Mr. X, is a congenital liar. He invented almost every claim in the book out of whole cloth, with no regard for the truth of the matter. There was only one exception. There is one matter on which Mr. X is constitutionally compelled to write and to write truthfully: the color of his mother’s socks on the day of his birth. At one point in B, Mr. X writes that his mother was wearing blue socks when she gave birth to him. This claim was scrupulously researched and is true. However, there is nothing in the text of B to indicate that Mr. X treated this claim any differently from all the invented claims in the book.
In this story, I am S, and P is “Mr. X’s mother was wearing blue socks when she gave birth to him.” Then:
P is true. (Mr. X’s mother really was wearing blue socks.)
S believes that P. (Mr. X claimed P in B, and I believe everything in B.)
If it were the case that (not-P), S would not believe that P. (Mr. X only claimed P in B because that was what his scrupulous research revealed. Had P not been true, Mr. X’s research would not have led him to believe it. And, since he is incapable of lying about this matter, he would not have put P in B. Therefore, since I don’t believe anything not in B, I would not have come to believe P.)
If it were the case that P, S would believe that P. (Mr. X was constitutionally compelled to write truthfully about the color of his mother’s socks when he was born. In all possible worlds in which his mother wore blue socks, Mr. X’s scrupulous research would have discovered it, and Mr. X would have reported it in B, where I would have read it, and so believed it.)
And yet, the intuitions on which Gettier problems play would say that I don’t know P. I just believe P because it was in a certain book, but I have no rational reason to trust anything in that book.
ETA: And here’s a counterexample from the other direction — that is, an example of knowledge that fails to meet Nozick’s criteria.
Suppose that you sit before an upside-down cup, under which there is a ping-pong ball that has been painted some color. Your job is to learn the color of the ping-pong ball.
You employ the following strategy: You flip a coin. If the coin comes up heads, you lift up the cup and look at the ping-pong ball, noting its color. If the coin comes up tails, you just give up and go with the ignorance prior.
Suppose that, when you flip the coin, it comes up heads. Accordingly, you look at the ping-pong ball and see that it is red. Intuitively, we would say that you know that the ping-pong ball is red.
Nonetheless, this belief fails to meet Nozick’s criterion 4. Had the coin come up tails, you would not have lifted the cup, so you would not have come to believe that the ball is red, even if this were still true.
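Since both counterexamples turn on how conditions 3 and 4 behave across nearby possibilities, here is a minimal sketch, in Python, of the cup-and-ball case. It is only a toy: the subjunctive “if it were the case that …” is crudely approximated by quantifying over an explicit, hand-picked list of nearby worlds, and all names in it are my own invention rather than Nozick’s actual semantics.

```python
from itertools import product

# Toy model: a "world" fixes the coin flip and the ball's colour.
# Nozick's subjunctive "if it were the case that X, S would ..." is crudely
# approximated by quantifying over every nearby world in which X holds.
COINS = ["heads", "tails"]
COLOURS = ["red", "green", "blue"]
nearby_worlds = list(product(COINS, COLOURS))

actual = ("heads", "red")

def ball_is_red(world):
    _coin, colour = world
    return colour == "red"

def believes_red(world):
    # S looks only if the coin came up heads; on tails S stays with the
    # ignorance prior and forms no belief about the colour.
    coin, colour = world
    return coin == "heads" and colour == "red"

def nozick_conditions(p, believes, actual, nearby):
    c1 = p(actual)                                         # 1: P is true
    c2 = believes(actual)                                  # 2: S believes P
    c3 = all(not believes(w) for w in nearby if not p(w))  # 3: not-P -> S would not believe P
    c4 = all(believes(w) for w in nearby if p(w))          # 4: P -> S would believe P
    return c1, c2, c3, c4

print(nozick_conditions(ball_is_red, believes_red, actual, nearby_worlds))
# (True, True, True, False): conditions 1-3 hold, but condition 4 fails,
# because in the nearby world ("tails", "red") S never lifts the cup.
```

On the same crude reading, the Mr. X scenario would come out with all four conditions true, even though intuition refuses to count the belief about the socks as knowledge, which is the point of the first counterexample.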
Wham! Okay, I’m reverted to my old position. “Knowledge” is a fuzzy word.
ETA: Or at least a position of uncertainty. I need to research how counterfactuals work.
Yes. An excellent illustration of ‘the Gettier rabbit-hole.’
There is an entire chapter in Pearl’s Causality book devoted to the rabbit-hole of defining what ‘actual cause’ means. (Note: the definition given there doesn’t work, and there is a substantial literature discussing why and proposing fixes).
The counterargument to your post is that some seemingly fuzzy concepts actually have perfect intuitive consensus (e.g. almost everyone will classify any example as either concept X or not concept X the same way). This seems to be the case with ‘actual cause.’ As long as intuitive consensus continues to hold, the argument goes, there is hope of a concise logical description of it.
Maybe the concept of “infinity” is a sort of success story. People said all sorts of confused and incompatible things about infinity for millennia. Then finally Cantor found a way to work with it sensibly. His approach proved to be robust enough to survive essentially unchanged even after the abandonment of naive set theory.
But even that isn’t an example of philosophers solving a problem with conceptual analysis in the sense of the OP.
Thanks for the Causality heads-up.
Can you name an example or two?
Well, as I said, ‘actual cause’ appears to be one example. The literature is full of little causal stories where most people agree that something is an actual cause of something else in the story—or not. Concepts which have already been formalized include ones that are used both colloquially in “everyday conversation” and precisely in physics (e.g. weight/mass).
One could argue that ‘actual cause’ is in some sense not a natural concept, but it’s still useful in the sense that formalizing the algorithm humans use to decide ‘actual cause’ problems can be useful for automating certain kinds of legal reasoning.
The Cyc project is a (probably doomed) example of a rabbit-hole project to construct an ontology of common sense. Lenat has been in that rabbit-hole for 27 years now.
Now, if only someone would give me a hand out of this rabbit-hole before I spend all morning in here ;).
Well, of course Bayesianism is your friend here. Probability theory elegantly supersedes the qualitative concepts of “knowledge”, “belief” and “justification” and, together with an understanding of heuristics and biases, nicely dissolves Gettier problems, so that we can safely call “knowledge” any assignment of high probability to a proposition that turns out to be true.
For example, take the original Gettier scenario. Since Jones has 10 coins in his pocket, P(man with 10 coins gets job) is bounded from below by P(Jones gets job). Hence any information that raises P(Jones gets job) necessarily raises P(man with 10 coins gets job) to something even higher, regardless of whether (Jones gets job) turns out to be true.
The psychological difficulty here is the counterintuitiveness of the rule P(A or B) >= P(A), and is in a sense “dual” to the conjunction fallacy. Just as one has to remember to subtract probability as burdensome details are introduced, one also has to remember to add probability as the reference class is broadened. When Smith learns the information suggesting Jones is the favored candidate, it may not feel like he is learning information about the set of all people with 10 coins in their pocket, but he is.
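To spell out the two standard facts being leaned on here, in my own shorthand (writing $J$ for “Jones gets the job” and $C$ for “a man with 10 coins in his pocket gets the job”): since $J$ entails $C$ once it is given that Jones has 10 coins in his pocket,

$$P(C \mid E) \;\ge\; P(J \mid E) \quad\text{for any evidence } E,$$
$$P(A \lor B) \;=\; P(A) + P(B) - P(A \land B) \;\ge\; P(A).$$

So whatever Smith learns that pushes up $P(J)$ pushes $P(C)$ at least as high, whether or not Jones actually gets the job.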
In your example of the book by Mr. X, we can observe that, because Mr. X was constitutionally compelled to write truthfully about his mother’s socks, your belief about that is legitimately entangled with reality, even if your other beliefs aren’t.
I agree that, with regard to my own knowledge, I should just determine the probability that I assign to a proposition P. Once I conclude that P has a high probability of being true, why should I care whether, in addition, I “know” P in some sense?
Nonetheless, if I had to develop a coherent concept of “knowledge”, I don’t think that I’d go with “‘knowledge’ [is] any assignment of high probability to a proposition that turns out to be true.” The crucial question is, who is assigning the probability? If it’s my assignment, then, as I said, I agree that, for me, the question about knowledge dissolves. (More generally, the question dissolves if the assignment was made according to my prior and my cognitive strategies.)
But Gettier problems are usually about some third person’s knowledge. When do you say that they know something? Suppose that, by your lights, they have a hopelessly screwed-up prior — say, an anti-Laplacian prior. So, they assign high probability to all sorts of stupid things for no good reason. Nonetheless, they have enough beliefs so that there are some things to which they assign high probability that turn out to be true. Would you really want to say that they “know” those things that just happen to be true?
That is essentially what was going on in my example with Mr. X’s book. There, I’m the third person. I have the stupid prior that says that everything in B is true and everything not in B is false. Now, you know that Mr. X is constitutionally compelled to write truthfully about his mother’s socks. So you know that reading B will legitimately entangle my beliefs with reality on that one solitary subject. But I don’t know that fact about Mr. X. I just believe everything in B. You know that my cognitive strategy will give me reliable knowledge on this one subject. But, intuitively, my epistemic state seems so screwed-up that you shouldn’t say that I know anything, even though I got this one thing right.
ETA: Gah. This is what I meant by “down the rabbit-hole”. These kinds of conversations are just too fun :). I look forward to your reply, but it will be at least a day before I reply in turn.
ETA: Okay, just one more thing. I just wanted to say that I agree with your approach to the original Gettier problem with the coins.
If you want to set your standard for knowledge this high, I would argue that you’re claiming nothing counts as knowledge since no one has any way to tell how good their priors are independently of their priors.
I’m not sure what you mean by a “standard for knowledge”. What standard for knowledge do you think that I have proposed?
You’re talking about someone trying to determine whether their own beliefs count as knowledge. I already said that the question of “knowledge” dissolves in that case. All that they should care about are the probabilities that they assign to propositions. (I’m not sure whether you agree with me there or not.)
But you certainly can evaluate someone else’s prior. I was trying to explain why “knowledge” becomes problematic in that situation. Do you disagree?
I think that while what you define carves out a nice lump of thingspace, it fails to capture the intuitive meaning of the word knowledge. If I guess randomly that it will rain tomorrow and turn out to be right, then it doesn’t fit intuition at all to say I knew that it would rain. This is why the traditional definition is “justified true belief” and that is what Gettier subverts.
You presumably already know all this. The point is that Tyrrell McAllister is trying (to avoid trying) to give a concise summary of the common usage of the word knowledge, rather than to give a definition that is actually useful for doing probability or solving problems.
Here, let me introduce you to my friend Taboo...
;)
There is a reason why the Gettier rabbit-hole is so dangerous. You can always cook up an improbable counterexample to any definition.
That’s a very interesting thought. I wonder what leads you to it.
With the caveat that I have not read all of this thread:
* Are you basing this on the fact that so far, all attempts at analysis have proven futile? (If so, maybe we need to come up with more robust conditions.)
* Do you think that the concept of ‘knowledge’ is inherently vague, similar (but not identical) to the way terms like ‘tall’ and ‘bald’ are?
* Do you suspect that there may be no fact of the matter about what ‘knowledge’ is, just like there is no fact of the matter about the baldness of the present King of France? (If so, then how do the competent speakers apply the verb ‘to know’ so well?)
If we could say with confidence that conceptual analysis of knowledge is a futile effort, I think that would be progress. And of course the interesting question would be why.
It may simply be that non-technical, common terms like ‘vehicle’ and ‘knowledge’ (and of course others like ‘table’) can’t be conceptually analyzed.
Also, experimental philosophy could be relevant to this discussion.
Let me expand on my comment a little: Thinking about the Gettier problem is dangerous in the same sense in which looking for a direct proof of the Goldbach conjecture is dangerous. These two activities share the following features:
When the problem was first posed, it was definitely worth looking for solutions. One could reasonably hope for success. (It would have been pretty nice if someone had found a solution to the Gettier problem within a year of its being posed.)
Now that the problem has been worked on for a long time by very smart people, you should assign very low probability to your own efforts succeeding.
Working on the problem can be addictive to certain kinds of people, in the sense that they will feel a strong urge to sink far more work into the problem than their probability of success can justify.
Despite the low probability of success for any given seeker, it’s still good that there are a few people out there pursuing a solution.
But the rest of us should spend our time on other things, aside from the occasional recreational jab at the problem, perhaps.
Besides, any resolution of the problem will probably result from powerful techniques arising in some unforeseen quarter. A direct frontal assault will probably not solve the problem.
So, when I called the Gettier problem “dangerous”, I just meant that, for most people, it doesn’t make sense to spend much time on it, because they will almost certainly fail, but some of us (including me) might find it too strong a temptation to resist.
Contemporary English-speakers must be implementing some finite algorithm when they decide whether their intuitions are happy with a claim of the form “Agent X knows Y”. If someone wrote down that algorithm, I suppose that you could call it a solution to the Gettier problem. But I expect that the algorithm, as written, would look to us like a description of some inscrutably complex neurological process. It would not look like a piece of 20th century analytic philosophy.
On the other hand, I’m fairly confident that some piece of philosophy text could dissolve the problem. In short, we may be persuaded to abandon the intuitions that lie at the root of the Gettier problem. We may decide to stop trying to use those intuitions to guide what we say about epistemic agents.
Both of your Gettier scenarios appear to confirm Nozick’s criteria 3 and 4 when the criteria are understood as criteria for a belief-creation strategy to count as a knowledge-creation strategy, evaluated outside the contrived scenario. Taking your scenarios one by one:
You have described the strategy of believing everything written in a certain book B. This strategy fails to conform to Nozick’s criteria 3 and 4 when considered outside of the contrived scenario in which the author is compelled to tell the truth about the socks, and therefore (if we apply the criteria) is not a knowledge creation strategy.
There are actually two strategies described here, and one of them is followed conditional on events occurring in the implementation of the other. The outer strategy is to flip the coin to decide whether to look at the ball. The inner strategy is to look at the ball. The inner strategy conforms to Nozick’s criteria 3 and 4, and therefore (if we apply the criteria) is a knowledge creation strategy.
In both cases, the intuitive results you describe appear to conform to Nozick’s criteria 3 and 4 understood as described in the first paragraph. Nozick’s criteria 3 and 4 (understood as above) appear moreover to play a key role in making sense of our intuitive judgment in both scenarios. That is, it strikes me as intuitive that the reason we don’t count the belief about the socks as knowledge is that it is the fruit of a strategy which, as a general strategy, appears to us to violate criteria 3 and 4 wildly, and only happens to satisfy them in a particular highly contrived context. And similarly, it strikes me as intuitive that we accept the belief about the color as knowledge because we are confident that the method of looking at the ball is a method which strongly satisfies criteria 3 and 4.
The problem with conversations about definitions is that we want our definitions to work perfectly even in the least convenient possible world.
So imagine that, as a third-person observer, you know enough to see that the scenario is not highly contrived — that it is in fact a logical consequence of some relatively simple assumptions about the nature of reality. Suppose that, for you, the whole scenario is in fact highly probable.
On second thought, don’t imagine that. For that is exactly the train of thought that leads to wasting time on thinking about the Gettier problem ;).
A large part of what was highly contrived was your selection of a particular true, honest, well-researched sentence in a book otherwise filled with lies, precisely because it is so unusual. In order to make it not contrived, we must suppose something like: the book has no lies, the book is all truth. Or we might even need to suppose that every sentence in every book is the truth. In such a world, the contrivedness of selecting a true sentence is minimized.
So let us imagine ourselves into a world in which every sentence in every book is true. And now we imagine someone who selects a book and believes everything in it. In this world, this strategy, generalized (to pick a random book and believe everything in it), becomes a reliable way to generate true belief. In such a world, I think one could arguably call such a strategy a genuine knowledge-creation strategy. In any case, it would depart so radically from your scenario (since in your scenario everything in the book other than that one fact is a lie) that it’s not at all clear how it would relate to your scenario.
I’m not sure that I’m seeing your point. Are you saying that
One shouldn’t waste time on trying to concoct exceptionless definitions — “exceptionless” in the sense that they fit our intuitions in every single conceivable scenario. In particular, we shouldn’t worry about “contrived” scenarios. If a definition works in the non-contrived cases, that’s good enough.
… or are you saying that
Nozick’s definition really is exceptionless. In every conceivable scenario, and for every single proposition P, every instance of someone “knowing” that P would conform to every one of Nozick’s criteria (and conversely).
… or are you saying something else?
Nozick apparently intended his definition to apply to single beliefs. I applied it to belief-creating strategies (or procedures, methods, mechanisms) rather than to individual beliefs. These strategies are to be evaluated in terms of their overall results if applied widely. Then I noticed that your two Gettier scenarios involved strategies which, respectively, violated and conformed to the definition as I applied it.
That’s all. I am not drawing conclusions (yet).
I’m reminded of the Golden Rule. Since I would like it if everyone would execute “if (I am Jiro) then rob”, I should execute that as well.
It’s actually pretty hard to define what it means for a strategy to be exceptionless, and it may be subject to a grue/bleen paradox.
I thought it sounded contrived at first, but then remembered there are tons of people who pick a book and believe everything they read in it, reaching many false conclusions and a few true ones.
I always thought the “if it were the case” thing was just a way of sweeping the knowledge problem under the rug by restricting counterexamples to “plausible” things that “would happen”. It gives the appearance of a definition of knowledge, while simply moving the problem into the “plausibility” box (which you need to use your knowledge to evaluate).
I’m not sure it’s useful to try to define a binary account of knowledge anyway though. People just don’t work like that.
A different objection, following Eliezer’s PS, is that:
Between me and a red box, there is a wall with a hole. I see the red box through the hole, and therefore know that the box is red. I reason, however, that I might have instead chosen to sit somewhere else, and I would not have been able to see the red box through the hole, and would not believe that the box is red.
Or more formally: If I know P, then I know (P or Q) for all Q, but:
P ⇒ Believes (P)
does not imply
(P v Q) ⇒ Believes (P v Q)
This is a more realistic, and hence better, version of the counterexample that I gave in my ETA to this comment.
I’m genuinely surprised. Condition 4 seems blatantly unnecessary and I had thought analytic philosophers (and Nozick in particular) more competent than that. Am I missing something?
Your hunch is right. Starting on page 179 of Nozick’s Philosophical Explanations, he addresses counterexamples like the one that Will Sawin proposed. In response, he gives a modified version of his criteria. As near as I can tell, my first counterexample still breaks it, though.
Yes. In the next post, I’ll be naming some definitions for moral terms that should be thrown out, for example those which rest on false assumptions about reality (e.g. “God exists.”)
It seems like there is a reason why words tend to have short definitions: the brain can only run short algorithms to determine whether an instance falls into the category or not.
I don’t think the brain usually makes this determination by looking at things that are much like definitions.
I think this isn’t the usual sense of ‘knowledge’. It’s too definite. Do I know there’s a website called less wrong, for example? Not for sure. It might have ceased to exist while I’m typing this—I have no present confirmation. And of course any confirmation only lasts as long as you look at it.
Knowledge is that state where one can make predictions about a subject which are better than chance. Of course this definition has its own flaws, doubtless....