I don’t see the problem. Sure, we can’t establish with complete certainty whether some proposition is true. It would then follow that we can’t establish with complete certainty whether someone genuinely knows that proposition. But why require complete certainty for your knowledge claims? Just as our truth claims are uncertain and subject to revision, our knowledge claims are as well.
So whether I know ("probabilistic JTB") something or not can depend on who's doing the evaluating, and what information they have? This ranges pretty far from the Platonic assumptions behind Gettier problems.
No, that doesn’t follow. Whether you know a proposition is an objective fact, just as the truth of a proposition is an objective fact. The probabilistic element is just that our judgments about knowledge are uncertain, just as our judgments about truth more generally are uncertain.
Example:
P: “Barack Obama is the American President.”
This is a statement that is very probably true, but I don’t assign it probability 1. Let’s say its probability (for me) is 0.9.
KP: “Manfred knows that Barack Obama is the American President.”
This statement assumes that P is in fact true. So the probability I assign this knowledge claim must be less than the probability I assign to P (assuming JTB). It must be less than 0.9. Now maybe someone else assigns a probability of 0.99 to P, in which case the probability they assign KP may well be greater than 0.9. So, yeah, the probabilities we attach to knowledge claims can depend on how much information we have. But that doesn’t change the fact that KP is objectively either true or false. The mere fact that different people assign different probabilities to KP based on the information they have doesn’t contradict this.
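The bound here is just one line of arithmetic: since (under JTB) KP entails P, P(KP) = P(KP and P) ≤ P(P). A minimal sketch in Python; the 0.8 figure for the justified-belief component is purely hypothetical, my own illustration:

```python
# My own illustration: under JTB, KP entails P, so P(KP) <= P(P).
p_P = 0.9            # my probability that P ("Obama is the American President") is true
p_JB_given_P = 0.8   # hypothetical: chance Manfred justifiably believes P, given that P is true

p_KP = p_P * p_JB_given_P   # my probability for the knowledge claim KP
assert p_KP < p_P           # the knowledge claim is always the weaker bet
```

Someone who assigns P a higher probability (say 0.99) would compute a correspondingly higher bound for KP, which is exactly the observer-dependence described above.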
[NOTE: As a matter of fact, I don’t think KP is determinately either true or false. I think what we mean by “knowledge” varies by context, so the truth of KP may also vary by context. For this sort of reason, I think an epistemology focused on the concept of “knowledge” is a mistake. Still, this is a separate issue from whether JTB makes sense.]
Man, I can really see why arguing about this stuff produces lots of heat and little light. Sorry about not being very constructive. Yes, you’re right—there’s a decent way to translate “JTB” into probabilistic terms, which is to put a probability value on the T, assume that I B if my probability for a statement is above some threshold, and temporarily ignore the definition issues with J. Then you can assign a statement like KP the appropriate probability if my probability is above the threshold, and 0 if my probability is below the threshold.
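That translation can be written down directly. A rough sketch, where the function name and the particular threshold value are my own illustrative choices, not anything fixed by the discussion:

```python
BELIEF_THRESHOLD = 0.9  # B: count as belief only if my probability clears this bar

def prob_knowledge_claim(p_true: float, threshold: float = BELIEF_THRESHOLD) -> float:
    """Probabilistic 'JTB' as sketched above, ignoring J:
    if my probability for the statement is below the belief threshold,
    the knowledge claim gets probability 0 (no belief, hence no knowledge);
    otherwise it gets my probability for the statement itself (the T component)."""
    return p_true if p_true >= threshold else 0.0

print(prob_knowledge_claim(0.95))  # 0.95 -- above threshold, so KP inherits P's probability
print(prob_knowledge_claim(0.6))   # 0.0  -- below threshold, so the knowledge claim fails
```

Note the discontinuity at the threshold: a statement at probability 0.89 yields a knowledge claim of probability 0, while 0.9 yields 0.9, which is one way the translation stays cruder than the underlying uncertainty.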
See my latest comment in our pet thread. I think this illustrates the problem with:
There’s no such thing as an objective fact yet discovered (excluding tautologies, perhaps).