There is a reason why the Gettier rabbit-hole is so dangerous. You can always cook up an improbable counterexample to any definition.
That’s a very interesting thought. I wonder what leads you to it.
With the caveat that I have not read all of this thread:
* Are you basing this on the fact that, so far, all attempts at analysis have proven futile? (If so, maybe we need to come up with more robust conditions.)
* Do you think that the concept of ‘knowledge’ is inherently vague, in a way similar (but not identical) to terms like ‘tall’ and ‘bald’?
* Do you suspect that there may be no fact of the matter about what ‘knowledge’ is, just as there is no fact of the matter about the baldness of the present King of France? (If so, how do competent speakers apply the verb ‘to know’ so well?)
If we could say with confidence that conceptual analysis of knowledge is a futile effort, I think that would be progress. And of course the interesting question would be why.
It may simply be that non-technical, common terms like ‘vehicle’ and ‘knowledge’ (and of course others like ‘table’) can’t be conceptually analyzed.
Also, experimental philosophy could be relevant to this discussion.
Let me expand on my comment a little: Thinking about the Gettier problem is dangerous in the same sense in which looking for a direct proof of the Goldbach conjecture is dangerous. These two activities share the following features:
* When the problem was first posed, it was definitely worth looking for solutions. One could reasonably hope for success. (It would have been pretty nice if someone had found a solution to the Gettier problem within a year of its being posed.)
* Now that the problem has been worked on for a long time by very smart people, you should assign a very low probability to your own efforts succeeding.
* Working on the problem can be addictive for certain kinds of people, in the sense that they will feel a strong urge to sink far more work into the problem than their probability of success can justify.
* Despite the low probability of success for any given seeker, it’s still good that there are a few people out there pursuing a solution.
* But the rest of us should spend our time on other things, aside from the occasional recreational jab at the problem, perhaps.
* Besides, any resolution of the problem will probably result from powerful techniques arising in some unforeseen quarter. A direct frontal assault will probably not solve the problem.
So, when I called the Gettier problem “dangerous”, I just meant that, for most people, it doesn’t make sense to spend much time on it, because they will almost certainly fail, but some of us (including me) might find it too strong a temptation to resist.
Contemporary English-speakers must be implementing some finite algorithm when they decide whether their intuitions are happy with a claim of the form “Agent X knows Y”. If someone wrote down that algorithm, I suppose that you could call it a solution to the Gettier problem. But I expect that the algorithm, as written, would look to us like a description of some inscrutably complex neurological process. It would not look like a piece of 20th century analytic philosophy.
On the other hand, I’m fairly confident that some piece of philosophy text could dissolve the problem. In short, we may be persuaded to abandon the intuitions that lie at the root of the Gettier problem. We may decide to stop trying to use those intuitions to guide what we say about epistemic agents.
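To make the worry concrete, here is a minimal sketch of the kind of tidy, finite decision procedure a 20th-century analysis would suggest: the classical justified-true-belief (JTB) test. The names in it (`Claim`, `knows_jtb`, the stopped-clock case) are hypothetical illustrations, not anyone’s endorsed analysis.

```python
from dataclasses import dataclass

# Hypothetical illustration: the classical "justified true belief" (JTB) test,
# written as the kind of tidy, finite decision procedure a 20th-century
# analysis would suggest. The names below are invented for this sketch.

@dataclass
class Claim:
    proposition: str   # what the agent believes
    believed: bool     # does the agent believe it?
    true: bool         # is it in fact true?
    justified: bool    # does the agent have good grounds for believing it?

def knows_jtb(c: Claim) -> bool:
    """Candidate 'finite algorithm': knowledge = justified true belief."""
    return c.believed and c.true and c.justified

# Russell's stopped-clock case (a Gettier-style example): the clock stopped
# exactly twelve hours ago, so the agent's belief about the time is believed,
# true (by luck), and justified (clocks are normally reliable) -- yet most
# speakers' intuitions refuse to call it knowledge.
stopped_clock = Claim(
    proposition="It is two o'clock",
    believed=True,
    true=True,
    justified=True,
)

print(knows_jtb(stopped_clock))  # True -- but intuition says the agent does not know
```

A Gettier-style case passes this neat test even though competent speakers refuse to call it knowledge. Any written-down algorithm that matched speakers’ actual verdicts would have to separate the stopped-clock case from ordinary clock-reading, which is where the inscrutable complexity comes in.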