1) Have you read Gettier’s paper “Is Justified True Belief Knowledge?”? I recommended it; it seems to create problems for the JTB analysis of knowledge even assuming a Bayesian understanding of “justified.”
2) You’re misunderstanding the purpose of “true” in the JTB definition. It’s not a matter of assigning probability 1 to a proposition, it’s a matter of the proposition actually being true. As Eliezer would say, don’t confuse uncertainty in the map with uncertainty in the territory. Pick your favorite case of a scientific theory that was once well supported by the evidence, but turned out to be false. Back when available evidence supported it, did scientists know it was true?
1) Have you read Gettier’s paper “Is Justified True Belief Knowledge?”? I recommended it; it seems to create problems for the JTB analysis of knowledge even assuming a Bayesian understanding of “justified.”
As I argued in this comment from 2011, the intuitive reaction to the Gettier scenario is based on a probability-theoretic mistake analogous to the conjunction fallacy (you might call it the “disjunction fallacy”).
2) You’re misunderstanding the purpose of “true” in the JTB definition. It’s not a matter of assigning probability 1 to a proposition, it’s a matter of the proposition actually being true.
Yeah, but the trouble is that we don’t know if a non-tautological statement is true or not. ’S like we have some kind of uncertainty or incomplete information. So in order to evaluate what we know, it seems like rather than trying to make it depend on what’s true or not, we could use some kind of system for reasoning under uncertainty.
I don’t see the problem. Sure, we can’t establish with complete certainty whether some proposition is true. It would then follow that we can’t establish with complete certainty whether someone genuinely knows that proposition. But why require complete certainty for your knowledge claims? Just as our truth claims are uncertain and subject to revision, our knowledge claims are as well.
So whether I know (“probabilistic JTB”) something or not can depend on who’s doing the evaluating, and what information they have? This ranges pretty far from the platonic assumptions behind Gettier problems.
No, that doesn’t follow. Whether you know a proposition is an objective fact, just as the truth of a proposition is an objective fact. The probabilistic element is just that our judgments about knowledge are uncertain, just as our judgments about truth more generally are uncertain.
Example:
P: “Barack Obama is the American President.”
This is a statement that is very probably true, but I don’t assign it probability 1. Let’s say its probability (for me) is 0.9.
KP: “Manfred knows that Barack Obama is the American President.”
This statement assumes that P is in fact true. So the probability I assign this knowledge claim must be less than the probability I assign to P (assuming JTB). It must be less than 0.9. Now maybe someone else assigns a probability of 0.99 to P, in which case the probability they assign KP may well be greater than 0.9. So, yeah, the probabilities we attach to knowledge claims can depend on how much information we have. But that doesn’t change the fact that KP is objectively either true or false. The mere fact that different people assign different probabilities to KP based on the information they have doesn’t contradict this.
[NOTE: As a matter of fact, I don’t think KP is determinately either true or false. I think what we mean by “knowledge” varies by context, so the truth of KP may also vary by context. For this sort of reason, I think an epistemology focused on the concept of “knowledge” is a mistake. Still, this is a separate issue from whether JTB makes sense.]
Man, I can really see why arguing about this stuff produces lots of heat and little light. Sorry about not being very constructive. Yes, you’re right—there’s a decent way to translate “JTB” into probabilistic terms, which is to put a probability value on the T, assume that I B if my probability for a statement is above some threshold, and temporarily ignore the definition issues with J. Then you can assign a statement like KP the appropriate probability if my probability is above the threshold, and 0 if my probability is below the threshold.
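The translation just described can be sketched in a few lines of code. This is only an illustration of the scheme above, not anyone's canonical definition; the threshold value is an arbitrary assumption, and J is ignored, as stated.

```python
def knowledge_claim_probability(p_statement, belief_threshold=0.95):
    """Probabilistic reading of JTB as sketched above:
    - T: the statement's probability stands in for its truth,
    - B: I count as believing it only if my probability clears a threshold,
    - J: justification issues are set aside.
    Returns the probability assigned to a claim like KP ("I know P")."""
    if p_statement >= belief_threshold:
        # I believe it, so the knowledge claim inherits P's probability.
        return p_statement
    # Below the threshold I don't even believe it, so I can't know it.
    return 0.0

# With a 0.95 threshold, a 0.9-probable statement yields no knowledge claim:
print(knowledge_claim_probability(0.9))   # 0.0
print(knowledge_claim_probability(0.99))  # 0.99
```

Note how the output is discontinuous at the threshold, which is part of why this translation feels artificial compared with plain degrees of belief.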
It seems he considers that the statement ‘S knows that P’ can have only two possible values, true or false. This may have been the tradition within philosophy since Plato, but it seems to rule out many ordinary usages of ‘knowledge’, such as ‘I know a little about that’.
As noted by Edwin Jaynes, Bayesians usually consider knowledge in terms of probability:
In our terminology, a probability is something that we assign, in order to represent a state of knowledge.
In his great text on Bayesian inference, Probability Theory: The Logic of Science, he demonstrates that Aristotelian logic is a limiting case of probability theory: the results of logic are the results of probability theory when probabilities are restricted to the values 0 and 1. I believe this probabilistic approach provides a richer context for knowledge, in that there are degrees of certainty. My reworking of Plato’s definition attempted to transition it to this context.
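Jaynes’s limiting-case claim can be illustrated with a toy calculation (the intermediate numbers are arbitrary assumptions, chosen only for the example):

```python
def posterior_b(p_a, p_b_given_a, p_b_given_not_a):
    """Law of total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)."""
    return p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Limiting case with probabilities restricted to {0, 1}: this reproduces
# modus ponens ("A implies B; A; therefore B") as ordinary deduction.
print(posterior_b(1.0, 1.0, 0.0))  # 1.0

# Intermediate values give the degrees of certainty two-valued logic
# cannot express:
print(posterior_b(0.8, 1.0, 0.3))  # 0.86
```

When every probability is 0 or 1, the arithmetic collapses into the truth tables of Aristotelian logic; anywhere in between, it tracks partial belief.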
Pick your favorite case of a scientific theory that was once well supported by the evidence, but turned out to be false. Back when available evidence supported it, did scientists know it was true?
Perhaps those scientists from the past should have said it had a high probability of being true. I may be misunderstanding you, but I do not believe science can produce certainty, and this seems to be a common view. I quote Wikipedia:
A scientific theory is empirical, and is always open to falsification if new evidence is presented. That is, no theory is ever considered strictly certain as science accepts the concept of fallibilism.
You tried to define knowledge as simply ‘justified belief’. The example scientific theory was believed to be true, and that belief was justified by the evidence then available. But, as we now know, that belief was false. By your definition, however, they can still be said to have ‘known’ the theory was true.
That is the problem with the definition not including the ‘true’ caveat.
I reject the notion that any scientific theory can be known to be 100% true. I stated:
Perhaps those scientists from the past should have said it had a high probability of being true.
As we all now know, Newton’s theory of gravitation is not 100% true, and therefore in a strict logical sense it is not true at all. We have counterexamples, such as the shift of Mercury’s perihelion, which it does not predict. However, the theory is still a source of knowledge; it was used by NASA to get men to the moon.
Perhaps considering knowledge as an all or none characteristic is unhelpful.
If we accept that a theory must be true or certain in order to contain knowledge it seems to me that no scientific theory can contain knowledge. All scientific theories are falsifiable and therefore uncertain.
I also consider it hubris to think we might ever develop a ‘true’ scientific theory as I believe the complexities of reality are far beyond what we can now imagine. I expect however that we will continue to accumulate knowledge along the way.
No, Newton’s theory of gravitation does not provide knowledge. Belief in it is no longer justified; it contradicts the evidence now available.
However, prior to relativity, the existing evidence justified belief in Newton’s theory. Whether or not it justified 100% confidence is irrelevant; if we require 100% justified confidence to consider something knowledge, no one knows or can know a single thing.
So, using the definition you gave, physicists everywhere (except one patent office in Switzerland) knew Newton’s theory to be true, because the belief “Newton’s theory is accurate” was justified. However, we now know it to be false.
Currently, we have a different theory of gravity. Belief in it is justified by the evidence. By your standard, we know it to be true. That’s patently ridiculous, however, since physicists still seek to expand or disprove it.
if we require 100% justified confidence to consider something knowledge, no one knows or can know a single thing.
However, I think you are misunderstanding me.
I don’t think we require 100% justified confidence for there to be knowledge. I believe knowledge is always probabilistic, and that scientific knowledge is always something less than 100% certain.
I suggest that knowledge is justified belief, but always with a probability less than 100%. As I wrote, I mean justified in the Bayesian sense, which assigns a probability to a state of knowledge. The correct probability to assign may be calculated with a Bayesian update.
This is a common Bayesian interpretation. As Jaynes wrote:
In our terminology, a probability is something that we assign, in order to represent a state of knowledge.
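The Bayesian update mentioned above can be sketched as follows. The likelihoods and the number of observations are arbitrary assumptions for illustration; the point is only that repeated updating raises confidence without ever reaching 1.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# A justified belief, updated by three pieces of favorable evidence:
p = 0.5
for _ in range(3):
    p = bayes_update(p, 0.9, 0.1)
print(p)  # ≈ 0.9986: high confidence, but still short of certainty
```

On this picture, “justified” is a matter of degree: each update moves the probability, and no finite amount of evidence forces it to exactly 1.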
I am fairly certain I understand your position better than you yourself do. You have eliminated the distinction between belief and knowledge entirely, thus rendering the word knowledge useless. Tabooing is not an argument; this conclusion is not valid.
You have repeatedly included in your argument false statements, even under your own interpretation. You have also misinterpreted quotes to back up your argument, such as misunderstanding the statement
In our terminology, a probability is something that we assign, in order to represent a state of knowledge.
to mean that knowledge is a probability, rather than the actual meaning of ‘probability quantifies how much we know that we do not know’.
You are in a state of confusion, though you may not have realized it, and I have no interest in continuing to point out the flawed foundations if you will ignore the demonstration. I am done here.
From what I can see, you’re arguing entirely over the definition of ‘knowledge’ instead of just splitting up individual concepts and giving them different names.
I basically agree, we are. What I’m trying to do is to maintain knowledge as a separate thing from belief. I don’t have particular attachment to this definition of knowledge (as pointed out above, “justified true belief” is a little simplistic), but I can’t find any way that jocko’s version is different from straight-up belief.
It’s not a matter of assigning probability 1 to a proposition, it’s a matter of the proposition actually being true.
I’m not sure I understand the difference. How is one supposed to have that information? I can imagine a proposition actually being true, but that’s about it.
ETA: From the deepest pit of the following comment thread:
The way I read the quote is:
A proposition being true doesn’t mean that it has the probability of 1. It does however mean that if a proposition is assigned a probability of 0.9, and it coincides with what the world is actually like, it is true.
This in turn could be read as:
A proposition being true doesn’t mean that it has the probability of 1. It does however mean that if a proposition is assigned a probability of 0.9, and it coincides with what someone knows about the world with probability of 1, it is true.
Your first reading seems OK to me. Actually, I don’t think it expresses the same thought as the quote you’re responding to, but it is a plausible implication of that thought.
I’m not sure how you move from the first reading to the second one, though. In fact, I don’t even understand the second reading, specifically this part:
and it coincides with what we know about the world with probability of 1
What do you mean when you say that the proposition “coincides with” what we know about the world? Do you just mean that the proposition expresses some aspect of our model of the world? But then how could it have probability 0.9 and yet our model have probability 1? That would be incoherent. But I can’t come up with any other interpretation of what you mean by “coincides with” here (or, for that matter what you mean by “know”, given that you’re rejecting a JTB type analysis). Help?
That’s what it’s trying to be. Could you provide an example how you would express the exact same thought with different words? I’d like to know if I’m attacking a strawman here.
What do you mean when you say that the proposition “coincides with” what we know about the world?
If our p = 0.9 proposition coincides with what the world is actually like, then we must assume someone has a 100% accurate model of what the world is actually like to make that claim. Otherwise we’re just playing tricks with our imaginations. As I tried to express before, I can imagine a true territory out there, but since nobody can verify it being there, i.e. have a perfect map, it’s a pointless concept for the purposes we’re discussing here.
That would be incoherent.
I’m trying to convey why a particular notion of truth is incoherent, but I’m not sure we agree about that yet.
I’ve seen science types try to reinterpret mainstream philosophy in terms of probability and information several times, and it tends to go nowhere. Why not understand philosophy in its own terms?
Often, the inability to state something in a mathematically precise way is an indication that the underlying idea is not precisely defined. This isn’t universally true, but it is a useful heuristic.
Sure, but asking “can we take this idea and state it in terms of math?” is a useful question. Moreover, for those aspects of philosophy where one can do so, this often results in it becoming much clearer what is going on. The raven paradox is a good example: it is a problem that really is difficult to follow, but when one states what is happening in terms of probability, the “paradox” quickly goes away. And this is true not just in philosophy but in many areas of interest. In fact, one problem philosophy has (and part of why it has such a bad reputation) is that once an area is sufficiently precisely defined, which often takes math, it becomes its own field. Math itself broke off from philosophy very early on, and physics also pretty early, but more recent breakoffs were linguistics, economics, and psychology.
One way of thinking about the goals of philosophy is define things precisely enough that people stop calling that thing philosophy. And one of the most effective ways historically to do so is using mathematical tools to help.
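The probabilistic dissolution of the raven paradox mentioned above can be made concrete with a toy model (all numbers here are illustrative assumptions). Observing a non-black non-raven does confirm “all ravens are black”, but only negligibly when non-black things vastly outnumber ravens:

```python
from fractions import Fraction

# Toy model of the raven paradox.
# H1: all ravens are black.  H2: exactly one raven is non-black.
non_black_non_ravens = 1_000_000  # shoes, green apples, ...

# Observation: a randomly sampled NON-BLACK object turns out not to be a raven.
p_obs_given_h1 = Fraction(1)  # under H1, every non-black thing is a non-raven
p_obs_given_h2 = Fraction(non_black_non_ravens, non_black_non_ravens + 1)

# Likelihood ratio in favor of H1: barely above 1, so the green apple
# does confirm "all ravens are black", but only negligibly.
likelihood_ratio = p_obs_given_h1 / p_obs_given_h2
print(float(likelihood_ratio))  # 1.000001
```

Stated this way, the “paradox” reduces to the unremarkable fact that some evidence is real but vanishingly weak.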
Sure, but “It can’t be stated in a mathematical framework that already does a good job of answering a lot of these questions, maybe we should try to adopt it so it can be, or maybe we should conclude that the idea really is confused if we have other information indicating it has problems, or maybe we should wait until experts have hashed out a bit more exactly what they mean and come back to the idea then” are not the same thing as just throwing an idea out because it isn’t mathematically precise.
I think in general that LW should pay more attention to mainstream philosophy. I find it interesting how often people on LW don’t realize how much of the standard positions here overlap with Quine’s positions, and he’s clearly mainstream. It is possible that people on LW overestimate the usefulness of the “can this be mathematicized?” question, but that doesn’t stop it from being a very useful question to ask.
Well, I’d argue that in essence, all of the alternative scenarios you list for dealing with non-mathematicized problems do constitute throwing an idea out, insofar as they represent a reshaping of the question by people who didn’t initially propose it, i.e., a type of misrepresentation, although the last one (“maybe we should wait until experts have hashed out a bit more exactly what they mean and come back to the idea then”) is an adequate way to deal with such problems.
it’s a pointless concept for the purposes we’re discussing here.
Seems to me it’s not pointless, because your failure to understand it is clearly holding you back...
Why are you failing to distinguish between “P” and “a person claiming P”? They are distinct things. Snow being white has nothing to do with who or what thinks snow is white. And there’s no reason anyone needs a “perfect map” to talk about truth any more than a perfect map is needed to talk about snow being white.
It’s not a matter of assigning probability 1 to a proposition, it’s a matter of the proposition actually being true.
How would you interpret “actually being true” here? Say you have evidence for a proposition that makes it 0.9 probable. How would you establish that the proposition is also true? (Understand that I’m not saying you should.)
If you have evidence that makes P 90% probable, then your evidence has established a 90% chance of P being true (which is to say, you are uncertain whether P is true or not, but you assign 90% of your probability mass to “P is true”, and 10% to “P is false”). The definition of “truth” that makes this work is very simple: let “P” and “P is true” be synonymous.
Perhaps. For the purposes of ‘knowledge’, whether or not you actually have knowledge of X depends on whether or not X is true, so knowledge is dependent on more than just your state of mind.
Someone upthread asked how you can “possibly have” the information that X is true, and in a sense you can’t, you can only get more certain of it.
How confident was that “perhaps”? Manfred seemed to agree with me that something fishy is going on. Pragmatist then steelmanned the JTB position by approaching it probabilistically.
I’m not interested in steelmanning these philosophers, I’m interested in what they actually think. Isn’t that the point of this series?
The ‘perhaps’ was more about whether you’d find it nonsensical or not. Some people do, some don’t. (For once, we actually have some related data about this, because knowledge has been a favorite subject of experimental philosophers. I’d have to look up some more studies/an analysis to be sure, but IIRC subjects were much more likely to accept the Gettier counterexamples as legitimate knowledge than philosophers).
True belief is so easily obtained that you can arrive at it by lucky guesses.
Justification is difficult.
Certain justification—certainty is about justification, not accuracy—is harder still, and may be impossible.
Whether you can have information that X is true depends on whether “information” means belief, justification, knowledge or something else.
Skeptics about knowledge tend to see truth as perfect justification. Non-skeptics tend to see truth as an out-of-the-mind correspondence with the world.
Certainty is usually not considered necessary for justification. A very few people do consider it necessary, but there are plenty of skeptics making the stronger claim that we don’t have significant justification, not simply that we don’t have certainty.
Half the people in a room believe, for no particular reason, that extraterrestrial life exists. The other half disbelieve it. Some of them will be right, but none of them know, because they have no systematic justification for their beliefs.
In your opinion, does this apply even if people never encounter extraterrestrial life and have no evidence for it, if there happens to be extraterrestrial life?
Does the above question make sense to you? It doesn’t make sense to me.
In your opinion, does this apply even if people never encounter extraterrestrial life and have no evidence for it, if there happens to be extraterrestrial life?
That is the realist (and, I think, common sense) attitude: that beliefs are rendered true by correspondence to chunks of reality.
Interpreting the meaning of “is true” and establishing that something “is true” are two different things—namely, semantics and epistemology. It’s common in science to sidestep semantic questions with operational answers, but that doesn’t necessarily work in other areas.
So if you agree about that, why are you saying things like
If our p = 0.9 proposition coincides with what the world is actually like, then we must assume someone has a 100% accurate model of what the world is actually like to make that claim.
How is the “if” connected to the “then” of that sentence? Your thinking isn’t making any sense to me.
Don’t you agree that you (and in fact all of us) assign probability less than 1 to many propositions that are in fact true? If you agree with this, then you acknowledge a difference between truth and assigning probability 1.
As for how one is supposed to have information about a proposition being actually true—through evidence causally associated with the truth of the proposition. This doesn’t mean that the evidence needs to be sufficient to raise one’s probability assignment all the way to 1. Assuming it is true that Barack Obama is currently the President of the United States, I have lots of evidence providing me information of this truth. Yet I’m not 100% certain about the truth of this proposition (although I’m pretty close).
Don’t you agree that you (and in fact all of us) assign probability less than 1 to many propositions that are in fact true?
I believe that many propositions I assign reasonable probability to could be assigned a much higher probability if I was inclined to look for more evidence. Does that mean those propositions are “actually true”?
Are you saying that truth is anything it’s possible to believe with high probability given the evidence that can be acquired?
Assuming it is true that Barack Obama is currently the President of the United States, I have lots of evidence providing me information of this truth. Yet I’m not 100% certain about the truth of this proposition (although I’m pretty close).
What would it mean to establish the knowledge that this proposition is actually true?
I believe that many propositions I assign reasonable probability to could be assigned a much higher probability if I was inclined to look for more evidence. Does that mean those propositions are “actually true”?
No, it doesn’t. I mean, any proposition to which I assign a non-extremal probability could be assigned a higher probability if I look for more evidence. So that criterion doesn’t pick out a useful class of propositions.
Are you saying that truth is anything it’s possible to believe with high probability given the evidence that can be acquired?
No. There are propositions which one can (rationally) believe with high probability given the available evidence that are nonetheless false.
I think the problem with what you’re doing is that you’re trying to analyze truth in terms of probability assignment. That’s backwards. The whole business of assigning probabilities to statements presupposes a notion of truth, of statements being true or false. When I say that I assign a probability of 0.6 to a particular proposition, I’m expressing my uncertainty about the truth of the proposition, or the odds at which I’d take a bet that the statement is true (or, more operationally, that any evidence obtained in the future will be statistically consistent with the truth of the statement).
So to even talk coherently about the significance of probability assignments, you need to talk about truth. If you now try to define truth itself in terms of probability assignments, you end up with vicious circularity.
What would it mean to establish the knowledge that this proposition is actually true?
If you mean establish it with absolute certainty, then I don’t think that’s possible. If you mean establish it with a high degree of confidence, then it would just amount to gathering a large amount of evidence that confirms the proposition.
There’s no difference between establishing the proposition P (e.g. establishing that Barack Obama is President), and establishing that the proposition P is actually true (e.g. establishing that “Barack Obama is President” is a true statement). If you know how to do the former, then you know how to do the latter. Adding “is actually true” at the end doesn’t produce any new epistemic requirements.
I think the problem with what you’re doing is that you’re trying to analyze truth in terms of probability assignment. That’s backwards.
Not really. If you can’t establish what truth is, then probability obviously can’t be an expression of your beliefs in relation to truth.
The whole business of assigning probabilities to statements presupposes a notion of truth, of statements being true or false.
The business of assigning probabilities presupposes that you can have some trust in induction, not that there has to be some platonic truth out there. Such a notion of truth is useless, because you can never establish what that truth is.
When I say that I assign a probability of 0.6 to a particular proposition, I’m expressing my uncertainty about the truth of the proposition, or the odds at which I’d take a bet that the statement is true (or, more operationally, that any evidence obtained in the future will be statistically consistent with the truth of the statement).
I’d say probability is more of an expression of your previous experiences, and how they can be used to predict what comes next. Why do induction and empiricism work? Because they have worked before, not because you’re presupposing a true world out there.
So to even talk coherently about the significance of probability assignments, you need to talk about truth. If you now try to define truth itself in terms of probability assignments, you end up with vicious circularity.
That’s why we need axioms. It seems to me axioms are not the kind of truth that JTB presupposes. I’m not saying we don’t need mathematical truths or axioms that are agreed upon. I’m saying that presupposing the true territory out there doesn’t add anything to the process of probabilistic reasoning.
If you mean establish it with absolute certainty, then I don’t think that’s possible.
That’s what I mean, and that’s what you would need if you think having that kind of a notion of truth is needed for probabilistic reasoning.
There’s no difference between establishing the proposition P (e.g. establishing that Barack Obama is President), and establishing that the proposition P is actually true (e.g. establishing that “Barack Obama is President” is a true statement). If you know how to do the former, then you know how to do the latter. Adding “is actually true” at the end doesn’t produce any new epistemic requirements.
The business of assigning probabilities presupposes that you can have some trust in induction, not that there has to be some platonic truth out there. Such a notion of truth is useless, because you can never establish what that truth is.
I don’t know what you mean by “platonic truth”. I suspect you are thinking of something much more metaphysically freighted than necessary. The kind of truth I’m talking about (and I think most people are talking about when they say “truth”) very much can be established. For instance, I can establish what the truth is about the capital of Latvia by looking up Latvia on Wikipedia. I just did, and established the truth of the proposition “The capital of Latvia is Riga.” Sure this doesn’t establish the truth with 100% certainty, but why should that be the standard for truth being a useful notion?
Truth is not something you need God-like noumenal superpowers to determine. It’s something that can be determined with the very human superpowers of empirical investigation and theory-building.
I’d say probability is more of an expression of your previous experiences, and how they can be used to predict what comes next.
I assign probabilities to past events, to empirically indistinguishable scientific hypotheses, to events that are in principle unobservable for me. Am I just doing it wrong, in your opinion?
That’s what I mean, and that’s what you would need if you think having that kind of a notion of truth is needed for probabilistic reasoning.
What kind of a notion of truth? The kind that requires absolute certainty? But I’m not aware of anyone arguing that one needs that kind of truth for the JTB account, or to make sense of probabilistic reasoning. Why do you think that kind of notion of truth is needed?
I’m not arguing for any kind of notion of truth. I thought the kind of notion of truth JTB seems to be assuming is confusing as hell, and I wanted clarification for what it was trying to say.
My objection started from here:
2) You’re misunderstanding the purpose of “true” in the JTB definition. It’s not a matter of assigning probability 1 to a proposition, it’s a matter of the proposition actually being true.
Can you get back to that, because I don’t understand you anymore?
OK, I guess we were talking past each other. What is it about that particular claim that you find objectionable? I thought what you were objecting to was the notion that a proposition being true is distinct from it being assigned probability 1, and I was responding to that. But are you objecting to something else?
Is your objection just that you don’t understand what people mean by “true” in the JTB account? I don’t think they’re committed to any particular notion, except for the claim that justification and truth are distinct. A belief can be highly justified and yet false, or not at all justified and yet true. Pretty much any of the theories discussed here would work. My personal preference is deflationism.
ETA: I posted this also on the top of this comment thread, so you can answer there if you wish.
The way I read the quote is:
A proposition being true doesn’t mean that it has the probability of 1. It does however mean that if a proposition is assigned a probability of 0.9, and it coincides with what the world is actually like, it is true.
This in turn could be read as:
A proposition being true doesn’t mean that it has the probability of 1. It does however mean that if a proposition is assigned a probability of 0.9, and it coincides with what we know about the world with probability of 1, it is true.
Do you now understand my objection? I predict it’s based on some grave misunderstanding. Thanks for the link, I’ll try to check it out when I have more time.
Two points:
1) Have you read Gettier’s paper “Is Justified True Belief Knowledge?”? I recommended it; it seems to create problems for the JTB analysis of knowledge even assuming a Bayesian understanding of “justified.”
2) You’re misunderstanding the purpose of “true” in the JTB definition. It’s not a matter of assigning probability 1 to a proposition, it’s a matter of the proposition actually being true. As Eliezer would say, don’t confuse uncertainty in the map with uncertainty in the territory. Pick your favorite case of a scientific theory that was once well supported by the evidence, but turned out to be false. Back when available evidence supported it, did scientists know it was true?
As I argued in this comment from 2011, the intuitive reaction to the Gettier scenario is based on a probability-theoretic mistake analogous to the conjunction fallacy (you might call it the “disjunction fallacy”).
Yeah, but the trouble is that we don’t know if a non-tautological statement is true or not. ’S like we have some kind of uncertainty or incomplete information. So in order to evaluate what we know, it seems like rather than trying to make it depend on what’s true or not, we could use some kind of system for reasoning under uncertainty.
I don’t see the problem. Sure, we can’t establish with complete certainty whether some proposition is true. It would then follow that we can’t establish with complete certainty whether someone genuinely knows that proposition. But why require complete certainty for your knowledge claims? Just as our truth claims are uncertain and subject to revision, our knowledge claims are as well.
So whether I know (“probabilistic JTB”) something or not can depend on who’s doing the evaluating, and what information they have? This ranges pretty far from the platonic assumptions behind Gettier problems.
No, that doesn’t follow. Whether you know a proposition is an objective fact, just as the truth of a proposition is an objective fact. The probabilistic element is just that our judgments about knowledge are uncertain, just as our judgments about truth more generally are uncertain.
Example:
P: “Barack Obama is the American President.”
This is a statement that is very probably true, but I don’t assign it probability 1. Let’s say its probability (for me) is 0.9.
KP: “Manfred knows that Barack Obama is the American President.”
This statement assumes that P is in fact true. So the probability I assign this knowledge claim must be less than the probability I assign to P (assuming JTB). It must be less than 0.9. Now maybe someone else assigns a probability of 0.99 to P, in which case the probability they assign KP may well be greater than 0.9. So, yeah, the probabilities we attach to knowledge claims can depend on how much information we have. But that doesn’t change the fact that KP is objectively either true or false. The mere fact that different people assign different probabilities to KP based on the information they have doesn’t contradict this.
[NOTE: As a matter of fact, I don’t think KP is determinately either true or false. I think what we mean by “knowledge” varies by context, so the truth of KP may also vary by context. For this sort of reason, I think an epistemology focused on the concept of “knowledge” is a mistake. Still, this is a separate issue from whether JTB makes sense.]
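The inequality above can be checked with a toy calculation. This is a minimal sketch: all the numbers are illustrative assumptions, not anyone’s actual credences.

```python
# Toy model of the JTB knowledge claim KP = "Manfred knows P".
# Under JTB, KP requires P to be true, Manfred to believe P, and that
# belief to be justified. The knowledge claim is therefore a conjunction
# that includes P itself, so P(KP) <= P(P).
# All numbers below are illustrative assumptions.

p_true = 0.9                 # my probability that P is true
p_belief_given_true = 0.95   # Manfred believes P, given that P is true
p_justified = 0.98           # that belief is justified, given that he holds it

# JTB conjunction: knowledge = true AND believed AND justified
p_knows = p_true * p_belief_given_true * p_justified

assert p_knows <= p_true  # a knowledge claim can never be more probable than P
print(round(p_knows, 3))  # → 0.838
```

The point is structural: since KP conjoins P with further conditions, its probability can never exceed the probability of P itself.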
Man, I can really see why arguing about this stuff produces lots of heat and little light. Sorry about not being very constructive. Yes, you’re right—there’s a decent way to translate “JTB” into probabilistic terms, which is to put a probability value on the T, assume that I B if my probability for a statement is above some threshold, and temporarily ignore the definition issues with J. Then you can assign a statement like KP the appropriate probability if my probability is above the threshold, and 0 if my probability is below the threshold.
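That translation can be sketched directly. The threshold value here is an arbitrary assumption for illustration.

```python
# A minimal sketch of the threshold translation described above:
# put a probability on T, count p > threshold as belief, and assign
# the knowledge claim KP probability p above the threshold and 0 below.

BELIEF_THRESHOLD = 0.5  # arbitrary illustrative choice

def knowledge_claim_probability(p, threshold=BELIEF_THRESHOLD):
    """Probability of the knowledge claim KP, given my probability p for P.

    Below the threshold I don't count as believing P at all, so the
    knowledge claim gets probability 0; above it, the claim inherits
    my probability for P itself.
    """
    return p if p > threshold else 0.0

print(knowledge_claim_probability(0.9))  # → 0.9
print(knowledge_claim_probability(0.3))  # → 0.0
```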
See my latest comment in our pet thread. I think this illustrates the problem with:
There’s no such thing as an objective fact yet discovered (excluding tautologies, perhaps).
Thanks for the link to Gettier’s paper.
It seems he considers that the statement ‘S knows that P’ can have only two possible values, true or false. This may have been a historical tradition within philosophy since Plato, but it seems to rule out many usual usages of ‘knowledge’, such as ‘I know a little about that’.
As noted by Edwin Jaynes, Bayesians usually consider knowledge in terms of probability:
In his great text on Bayesian inference, Probability Theory: The Logic of Science, he demonstrates that Aristotelian logic is a limiting case of probability theory: the results of logic are the results of probability theory when the values of the probabilities are restricted to 0 and 1. I believe this probabilistic approach provides a richer context for knowledge in that there are degrees of certainty. My reworking of Plato’s definition attempted to transition it to this context.
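Jaynes’s limiting-case claim can be illustrated with a one-step Bayes update (the likelihood values below are arbitrary assumptions): priors of exactly 0 or 1 never move, which is the deductive behavior of logic, while intermediate priors shift with evidence.

```python
# Bayesian updating, with logic as the limiting case where priors are
# restricted to 0 and 1. Likelihood values are arbitrary illustrations.

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    numerator = likelihood_h * prior
    denominator = numerator + likelihood_not_h * (1 - prior)
    return numerator / denominator if denominator > 0 else prior

# Priors of 0 and 1 are fixed points (deductive certainty);
# the intermediate prior moves with the evidence.
for prior in (0.0, 0.5, 1.0):
    posterior = bayes_update(prior, likelihood_h=0.8, likelihood_not_h=0.2)
    print(prior, "->", round(posterior, 3))  # 0.0 -> 0.0, 0.5 -> 0.8, 1.0 -> 1.0
```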
Perhaps those scientists from the past should have said it had a high probability of being true. I may be misunderstanding you, but I do not believe science can produce certainty, and this seems to be a common view. I quote Wikipedia.
You tried to define knowledge as simply ‘justified belief’. The example scientific theory was believed to be true, and that belief was justified by the evidence then available. But, as we now know, that belief was false. By your definition, however, they can still be said to have ‘known’ the theory was true.
That is the problem with the definition not including the ‘true’ caveat.
You misunderstand me. I did not say it was
I reject the notion that any scientific theory can be known to be 100% true, I stated:
As we all know now, Newton’s theory of gravitation is not 100% true, and therefore in a logical sense it is not true at all. We have counterexamples, such as the shift of Mercury’s perihelion, which it does not predict. However, the theory is still a source of knowledge; it was used by NASA to get men to the moon.
Perhaps considering knowledge as an all-or-none characteristic is unhelpful.
If we accept that a theory must be true or certain in order to contain knowledge, it seems to me that no scientific theory can contain knowledge. All scientific theories are falsifiable and therefore uncertain.
I also consider it hubris to think we might ever develop a ‘true’ scientific theory, as I believe the complexities of reality are far beyond what we can now imagine. I expect, however, that we will continue to accumulate knowledge along the way.
No, Newton’s theory of gravitation does not provide knowledge. Belief in it is no longer justified; it contradicts the evidence now available.
However, prior to relativity, the existing evidence justified belief in Newton’s theory. Whether or not it justified 100% confidence is irrelevant; if we require 100% justified confidence to consider something knowledge, no one knows or can know a single thing.
So, using the definition you gave, physicists everywhere (except one patent office in Switzerland) knew Newton’s theory to be true, because the belief “Newton’s theory is accurate” was justified. However, we now know it to be false.
Currently, we have a different theory of gravity. Belief in it is justified by the evidence. By your standard, we know it to be true. That’s patently ridiculous, however, since physicists still seek to expand or disprove it.
I agree with your statement:
However, I think you are misunderstanding me.
I don’t think we require 100% justified confidence for there to be knowledge. I believe knowledge is always a probability, and that scientific knowledge is always something less than 100%.
I suggest that knowledge is justified belief, but that it is always a probability less than 100%. As I wrote: I mean justified in the Bayesian sense, which assigns a probability to a state of knowledge. The correct probability to assign may be calculated with a Bayesian update.
This is a common Bayesian interpretation. As Jaynes wrote:
I am fairly certain I understand your position better than you yourself do. You have eliminated the distinction between belief and knowledge entirely, thus rendering the word knowledge useless. Tabooing is not an argument; this conclusion is not valid.
You have repeatedly included in your argument false statements, even under your own interpretation. You have also misinterpreted quotes to back up your argument, such as misunderstanding the statement
To mean that knowledge is a probability, rather than the actual meaning of ‘probability quantifies how much we know that we do not know’.
You are in a state of confusion, though you may not have realized it, and I have no interest in continuing to point out the flawed foundations if you will ignore the demonstration. I am done here.
From what I can see, you’re arguing entirely over the definition of ‘knowledge’ instead of just splitting up individual concepts and giving them different names.
I basically agree, we are. What I’m trying to do is to maintain knowledge as a separate thing from belief. I don’t have particular attachment to this definition of knowledge (as pointed out above, “justified true belief” is a little simplistic), but I can’t find any way that jocko’s version is different from straight-up belief.
I’m not sure I understand the difference. How is one supposed to have that information? I can imagine a proposition actually being true, but that’s about it.
ETA: From the deepest pit of the following comment thread:
The way I read the quote is:
This in turn could be read as:
Your first reading seems OK to me. Actually, I don’t think it expresses the same thought as the quote you’re responding to, but it is a plausible implication of that thought.
I’m not sure how you move from the first reading to the second one, though. In fact, I don’t even understand the second reading, specifically this part:
What do you mean when you say that the proposition “coincides with” what we know about the world? Do you just mean that the proposition expresses some aspect of our model of the world? But then how could it have probability 0.9 and yet our model have probability 1? That would be incoherent. But I can’t come up with any other interpretation of what you mean by “coincides with” here (or, for that matter what you mean by “know”, given that you’re rejecting a JTB type analysis). Help?
That’s what it’s trying to be. Could you provide an example how you would express the exact same thought with different words? I’d like to know if I’m attacking a strawman here.
If our p 0.9 proposition coincides with what the world is actually like, then we must assume someone has a 100 % accurate model of what the world is actually like to make that claim. Otherwise we’re just playing tricks with our imaginations. As I tried to express before, I can imagine a true territory out there, but since nobody can verify it being there, i.e. have a perfect map, it’s a pointless concept for the purposes we’re discussing here.
I’m trying to convey why a particular notion of truth is incoherent, but I’m not sure we agree about that yet.
Would the model still be 100% accurate if there were a label on P saying “only 90% certain”?
Why don’t you read the paper and try how that fits yourself, and then ask yourself, is this really what they intend?
I’ve read Gettier’s famous paper, a long time ago, and he doesn’t discuss models or probabilities.
Do you think it can be understood in a probabilistic framework, or will that just yield nonsense?
I’ve seen science types try to reinterpret mainstream philosophy in terms of probability and information several times, and it tends to go nowhere. Why not understand philosophy in its own terms?
Often, the inability to state something in a mathematically precise way is an indication that the underlying idea is not precisely defined. This isn’t universally true, but it is a useful heuristic.
Hardly anything is mathematically precise. It’s not new that philosophy isn’t either.
Sure, but asking “can we take this idea and state it in terms of math?” is a useful question. Moreover, for those aspects of philosophy where one can do this, it often results in it becoming much clearer what is going on. The raven paradox is a good example of this: it is a problem that really is difficult to follow, but when one states what is happening in terms of probability, the “paradox” quickly goes away. And this is true not just in philosophy but in many areas of interest. In fact, one problem philosophy has (and part of why it has such a bad reputation) is that once an area is sufficiently precisely defined, which often takes math, it becomes its own field. Math itself broke off from philosophy very early on, and physics also pretty early, but more recent breakoffs were linguistics, economics, and psychology.
One way of thinking about the goals of philosophy is to define things precisely enough that people stop calling them philosophy. And one of the most effective ways historically to do so is to use mathematical tools.
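For what it’s worth, the probabilistic dissolution of the raven paradox can be sketched in a few lines. All the counts below are made-up illustrative numbers: observing a non-black non-raven really does confirm “all ravens are black”, but only negligibly, because non-ravens vastly outnumber ravens.

```python
# A rough Bayesian sketch of the raven paradox. H = "all ravens are black".
# Sampling model: draw a random non-black object and observe that it is
# not a raven. All counts are made-up illustrative assumptions.

n_nonblack_nonravens = 10**9  # assumed count of non-black non-ravens
n_ravens = 10**4              # assumed count of ravens
f_nonblack = 0.5              # under ~H, assumed fraction of non-black ravens

# Under H, a non-black object is guaranteed not to be a raven:
likelihood_h = 1.0
# Under ~H, some non-black objects are ravens, so it's slightly less likely:
nonblack_ravens = f_nonblack * n_ravens
likelihood_not_h = n_nonblack_nonravens / (n_nonblack_nonravens + nonblack_ravens)

bayes_factor = likelihood_h / likelihood_not_h
print(bayes_factor)  # barely above 1: real but negligible evidence for H
```

So the observation does support the hypothesis, just by a factor so close to 1 that the intuition “a white shoe tells me nothing about ravens” is approximately right.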
“It can’t be stated in terms of maths, so throw it out” is not useful.
Sure, but “It can’t be stated in a mathematical framework that already does a good job of answering a lot of these questions, maybe we should try to adopt it so it can be, or maybe we should conclude that the idea really is confused if we have other information indicating it has problems, or maybe we should wait until experts have hashed out a bit more exactly what they mean and come back to the idea then” are not the same thing as just throwing an idea out because it isn’t mathematically precise.
I think in general that LW should pay more attention to mainstream philosophy. I find it interesting how often people on LW don’t realize how much of the standard positions here overlap with Quine’s positions, and he’s clearly mainstream. It is possible that people on LW overestimate the usefulness of the “can this be mathematicized?” question, but that doesn’t stop it from being a very useful question to ask.
Well, I’d argue that in essence, all of the alternative scenarios you list for dealing with non-mathematicized problems do constitute throwing an idea out, insofar as they represent a reshaping of the question by people who didn’t initially propose it, i.e., a type of misrepresentation, although the last one (“maybe we should wait until experts have hashed out a bit more exactly what they mean and come back to the idea then”) is an adequate way to deal with such problems.
Seems to me it’s not pointless, because your failure to understand it is clearly holding you back...
Why are you failing to distinguish between “P” and “a person claiming P”? They are distinct things. Snow being white has nothing to do with who or what thinks snow is white. And there’s no reason anyone needs a “perfect map” to talk about truth any more than a perfect map is needed to talk about snow being white.
Quoting Chris:
How would you interpret “actually being true” here? Say you have evidence for a proposition that makes it 0.9 probable. How would you establish that the proposition is also true? (Understand that I’m not saying you should.)
If you have evidence that makes P 90% probable, then your evidence has established a 90% chance of P being true (which is to say, you are uncertain whether P is true or not, but you assign 90% of your probability mass to “P is true”, and 10% to “P is false”). The definition of “truth” that makes this work is very simple: let “P” and “P is true” be synonymous.
I agree with you here completely. I was just wondering if particular philosophers had something more nonsensical in mind.
Perhaps. For the purposes of ‘knowledge’, whether or not you actually have knowledge of X depends on whether or not X is true, so knowledge is dependent on more than just your state of mind.
Someone upthread asked how you can “possibly have” the information that X is true, and in a sense you can’t, you can only get more certain of it.
Did any of that help?
I think that someone was me :)
How confident was that “perhaps”? Manfred seemed to agree with me that something fishy is going on. Pragmatist then steelmanned the JTB position by approaching it probabilistically.
I’m not interested in steelmanning these philosophers, I’m interested in what they actually think. Isn’t that the point of this series?
The ‘perhaps’ was more about whether you’d find it nonsensical or not. Some people do, some don’t. (For once, we actually have some related data about this, because knowledge has been a favorite subject of experimental philosophers. I’d have to look up some more studies/an analysis to be sure, but IIRC subjects were much more likely to accept the Gettier counterexamples as legitimate knowledge than philosophers).
True belief is so easily obtained that you can arrive at it by lucky guesses. Justification is difficult. Certain justification—certainty is about justification, not accuracy—is harder still, and may be impossible. Whether you can have information that X is true depends on whether “information” means belief, justification, knowledge or something else. Skeptics about knowledge tend to see truth as perfect justification. Non-skeptics tend to see truth as an out-of-the-mind correspondence with the world.
Certainty is usually not considered necessary for justification. A very few people do consider it necessary, but there are plenty of skeptics making the stronger claim that we don’t have significant justification, not simply that we lack certainty.
Please expand. Give us an example.
Half the people in a room believe, for no particular reason, that extraterrestrial life exists. The other half disbelieve it. Some of them will be right, but none of them know, because they have no systematic justification for their beliefs.
In your opinion, does this apply even if people never encounter extraterrestrial life and have no evidence for it, if there happens to be extraterrestrial life?
Does the above question make sense to you? It doesn’t make sense to me.
That is the realist (and, I think, common sense) attitude: that beliefs are rendered true by correspondence to chunks of reality.
Yes. I don’t assume truth has to be in the head.
If science is falsifiable and therefore uncertain is any of it true? If not then I assume JTB must judge “scientific knowledge” to be an oxymoron.
If some scientific knowledge is true does that mean that the theory will not be revised, extended or corrected in the next 1,000 years?
Does truth apply to science? If not should “true” be included in our definition of knowledge?
The JTB account per se does not say justification must be certain.
Interpreting the meaning of “is true” and establishing that something “is true” are two different things—namely, semantics and epistemology. It’s common in science to sidestep semantic questions with operational answers, but that doesn’t necessarily work in other areas.
Can you give more examples of such sidestepping where it doesn’t work?
It’s more a case of noting that there is no reason for it to work everywhere, and no evidence that it works outside of special cases.
I’m not; I know they’re distinct things. It seems to me you misunderstood me. What’s with the tone?
I know that.
So if you agree about that, why are you saying things like
How is the “if” connected to the “then” of that sentence? Your thinking isn’t making any sense to me.
That quote shouldn’t make sense to you, and it’s not my thinking. Keep in mind I’m not endorsing a notion of truth here, I’m questioning it.
White and snow wouldn’t exist without someone thinking about them, so I’m not sure what you’re trying to say here.
What goes on in mountains when no one is thinking about them...?
I actually had this particular failure mode in mind when I was responding to you. But let’s not go there; it’s not important.
Don’t you agree that you (and in fact all of us) assign probability less than 1 to many propositions that are in fact true? If you agree with this, then you acknowledge a difference between truth and assigning probability 1.
As for how one is supposed to have information about a proposition being actually true—through evidence causally associated with the truth of the proposition. This doesn’t mean that the evidence needs to be sufficient to raise one’s probability assignment all the way to 1. Assuming it is true that Barack Obama is currently the President of the United States, I have lots of evidence providing me information of this truth. Yet I’m not 100% certain about the truth of this proposition (although I’m pretty close).
I believe that many propositions I assign reasonable probability to could be assigned a much higher probability if I were inclined to look for more evidence. Does that mean those propositions are “actually true”?
Are you saying that truth is anything it’s possible to believe with high probability given the evidence that can be acquired?
What would it mean to establish the knowledge that this proposition is actually true?
No, it doesn’t. I mean, any proposition to which I assign a non-extremal probability could be assigned a higher probability if I look for more evidence. So that criterion doesn’t pick out a useful class of propositions.
No. There are propositions which one can (rationally) believe with high probability given the available evidence that are nonetheless false.
I think the problem with what you’re doing is that you’re trying to analyze truth in terms of probability assignment. That’s backwards. The whole business of assigning probabilities to statements presupposes a notion of truth, of statements being true or false. When I say that I assign a probability of 0.6 to a particular proposition, I’m expressing my uncertainty about the truth of the proposition, or the odds at which I’d take a bet that the statement is true (or, more operationally, that any evidence obtained in the future will be statistically consistent with the truth of the statement).
So to even talk coherently about the significance of probability assignments, you need to talk about truth. If you now try to define truth itself in terms of probability assignments, you end up with vicious circularity.
If you mean establish it with absolute certainty, then I don’t think that’s possible. If you mean establish it with a high degree of confidence, then it would just amount to gathering a large amount of evidence that confirms the proposition.
There’s no difference between establishing the proposition P (e.g. establishing that Barack Obama is President), and establishing that the proposition P is actually true (e.g. establishing that “Barack Obama is President” is a true statement). If you know how to do the former, then you know how to do the latter. Adding “is actually true” at the end doesn’t produce any new epistemic requirements.
Not really. If you can’t establish what truth is, then probability obviously can’t be an expression of your beliefs in relation to truth.
The business of assigning probabilities presupposes that you can have some trust in induction, not that there has to be some platonic truth out there. Such a notion of truth is useless, because you can never establish what that truth is.
I’d say probability is more of an expression of your previous experiences, and how they can be used to predict what comes next. Why do induction and empiricism work? Because they have worked before, not because you’re presupposing a true world out there.
That’s why we need axioms. It seems to me axioms are not the kind of truth that JTB presupposes. I’m not saying we don’t need mathematical truths or axioms that are agreed upon. I’m saying that presupposing the true territory out there doesn’t add anything to the process of probabilistic reasoning.
That’s what I mean, and that’s what you would need if you think having that kind of a notion of truth is needed for probabilistic reasoning.
I agree.
I don’t know what you mean by “platonic truth”. I suspect you are thinking of something much more metaphysically freighted than necessary. The kind of truth I’m talking about (and I think most people are talking about when they say “truth”) very much can be established. For instance, I can establish what the truth is about the capital of Latvia by looking up Latvia on Wikipedia. I just did, and established the truth of the proposition “The capital of Latvia is Riga.” Sure this doesn’t establish the truth with 100% certainty, but why should that be the standard for truth being a useful notion?
Truth is not something you need God-like noumenal superpowers to determine. It’s something that can be determined with the very human superpowers of empirical investigation and theory-building.
I assign probabilities to past events, to empirically indistinguishable scientific hypotheses, to events that are in principle unobservable for me. Am I just doing it wrong, in your opinion?
What kind of a notion of truth? The kind that requires absolute certainty? But I’m not aware of anyone arguing that one needs that kind of truth for the JTB account, or to make sense of probabilistic reasoning. Why do you think that kind of notion of truth is needed?
I’m not arguing for any kind of notion of truth. I thought the kind of notion of truth JTB seems to be assuming is confusing as hell, and I wanted clarification for what it was trying to say.
My objection started from here:
Can you get back to that, because I don’t understand you anymore?
OK, I guess we were talking past each other. What is it about that particular claim that you find objectionable? I thought what you were objecting to was the notion that a proposition being true is distinct from it being assigned probability 1, and I was responding to that. But are you objecting to something else?
Is your objection just that you don’t understand what people mean by “true” in the JTB account? I don’t think they’re committed to any particular notion, except for the claim that justification and truth are distinct. A belief can be highly justified and yet false, or not at all justified and yet true. Pretty much any of the theories discussed here would work. My personal preference is deflationism.
ETA: I posted this also on the top of this comment thread, so you can answer there if you wish.
The way I read the quote is:
This in turn could be read as:
Do you now understand my objection? I predict it’s based on some grave misunderstanding. Thanks for the link, I’ll try to check it out when I have more time.