Don’t you agree that you (and in fact all of us) assign probability less than 1 to many propositions that are in fact true?
I believe that many propositions I assign reasonable probability to could be assigned a much higher probability if I was inclined to look for more evidence. Does that mean those propositions are “actually true”?
Are you saying that truth is anything it’s possible to believe with high probability given the evidence that can be acquired?
Assuming it is true that Barack Obama is currently the President of the United States, I have lots of evidence informing me of this truth. Yet I’m not 100% certain about the truth of this proposition (although I’m pretty close).
What would it mean to establish the knowledge that this proposition is actually true?
I believe that many propositions I assign reasonable probability to could be assigned a much higher probability if I was inclined to look for more evidence. Does that mean those propositions are “actually true”?
No, it doesn’t. I mean, any proposition to which I assign a non-extremal probability could be assigned a higher probability if I look for more evidence. So that criterion doesn’t pick out a useful class of propositions.
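To make that concrete, here’s a toy Bayesian update (the numbers and the little helper function are made up purely for illustration) showing how one piece of confirming evidence can push a non-extremal probability higher:

```python
# Toy Bayes update with made-up numbers: a single piece of confirming
# evidence raises a non-extremal probability.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Start at 0.7; observe evidence four times as likely if H is true than if it's false:
print(bayes_update(0.7, 0.8, 0.2))  # ~0.90
```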
Are you saying that truth is anything it’s possible to believe with high probability given the evidence that can be acquired?
No. There are propositions which one can (rationally) believe with high probability given the available evidence that are nonetheless false.
I think the problem with what you’re doing is that you’re trying to analyze truth in terms of probability assignment. That’s backwards. The whole business of assigning probabilities to statements presupposes a notion of truth, of statements being true or false. When I say that I assign a probability of 0.6 to a particular proposition, I’m expressing my uncertainty about the truth of the proposition, or the odds at which I’d take a bet that the statement is true (or, more operationally, that any evidence obtained in the future will be statistically consistent with the truth of the statement).
So to even talk coherently about the significance of probability assignments, you need to talk about truth. If you now try to define truth itself in terms of probability assignments, you end up with vicious circularity.
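In case the betting gloss is unclear, here’s a rough sketch (my own toy illustration, nothing deeper) of what assigning probability 0.6 amounts to in betting terms:

```python
# Toy illustration of the betting reading of a probability assignment:
# if I assign probability p to a proposition, a ticket paying 1 unit if
# the proposition is true has expected value p, so p is the highest price
# at which buying the ticket is a fair deal for me.

def fair_odds(p):
    """Odds in favour (for : against) at which a bet on the proposition is fair."""
    return p / (1 - p)

p = 0.6
print(f"Fair price for a 1-unit ticket: {p}")
print(f"Fair odds: {fair_odds(p):.1f} : 1 (i.e. 3 : 2)")
```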
What would it mean to establish the knowledge that this proposition is actually true?
If you mean establish it with absolute certainty, then I don’t think that’s possible. If you mean establish it with a high degree of confidence, then it would just amount to gathering a large amount of evidence that confirms the proposition.
There’s no difference between establishing the proposition P (e.g. establishing that Barack Obama is President), and establishing that the proposition P is actually true (e.g. establishing that “Barack Obama is President” is a true statement). If you know how to do the former, then you know how to do the latter. Adding “is actually true” at the end doesn’t produce any new epistemic requirements.
I think the problem with what you’re doing is that you’re trying to analyze truth in terms of probability assignment. That’s backwards.
Not really. If you can’t establish what truth is, then probability obviously can’t be an expression of your beliefs in relation to truth.
The whole business of assigning probabilities to statements presupposes a notion of truth, of statements being true or false.
The business of assigning probabilities presupposes that you can have some trust in induction, not that there has to be some platonic truth out there. Such a notion of truth is useless, because you can never establish what that truth is.
When I say that I assign a probability of 0.6 to a particular proposition, I’m expressing my uncertainty about the truth of the proposition, or the odds at which I’d take a bet that the statement is true (or, more operationally, that any evidence obtained in the future will be statistically consistent with the truth of the statement).
I’d say probability is more of an expression of your previous experiences, and how they can be used to predict what comes next. Why do induction and empiricism work? Because they have worked before, not because you’re presupposing a true world out there.
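One simple way to cash that out (just a toy formalization of my own, not something anyone is committed to) is Laplace’s rule of succession, which turns past observations directly into a prediction about the next one:

```python
# Laplace's rule of succession: predict the next observation from past
# frequency alone, using a uniform prior over the unknown success rate.
# After s successes in n trials, the next trial succeeds with probability
# (s + 1) / (n + 2).

def rule_of_succession(successes, trials):
    """Predictive probability that the next observation is a success."""
    return (successes + 1) / (trials + 2)

# Induction has worked on every one of the last 10,000 occasions I checked:
print(rule_of_succession(10_000, 10_000))  # ~0.9999
```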
So to even talk coherently about the significance of probability assignments, you need to talk about truth. If you now try to define truth itself in terms of probability assignments, you end up with vicious circularity.
That’s why we need axioms. It seems to me axioms are not the kind of truth that JTB presupposes. I’m not saying we don’t need mathematical truths or axioms that are agreed upon. I’m saying that presupposing the true territory out there doesn’t add anything to the process of probabilistic reasoning.
If you mean establish it with absolute certainty, then I don’t think that’s possible.
That’s what I mean, and that’s what you would need if you think having that kind of a notion of truth is needed for probabilistic reasoning.
There’s no difference between establishing the proposition P (e.g. establishing that Barack Obama is President), and establishing that the proposition P is actually true (e.g. establishing that “Barack Obama is President” is a true statement). If you know how to do the former, then you know how to do the latter. Adding “is actually true” at the end doesn’t produce any new epistemic requirements.
I agree.
The business of assigning probabilities presupposes that you can have some trust in induction, not that there has to be some platonic truth out there. Such a notion of truth is useless, because you can never establish what that truth is.
I don’t know what you mean by “platonic truth”. I suspect you are thinking of something much more metaphysically freighted than necessary. The kind of truth I’m talking about (and I think most people are talking about when they say “truth”) very much can be established. For instance, I can establish what the truth is about the capital of Latvia by looking up Latvia on Wikipedia. I just did, and established the truth of the proposition “The capital of Latvia is Riga.” Sure this doesn’t establish the truth with 100% certainty, but why should that be the standard for truth being a useful notion?
Truth is not something you need God-like noumenal superpowers to determine. It’s something that can be determined with the very human superpowers of empirical investigation and theory-building.
I’d say probability is more of an expression of your previous experiences, and how they can be used to predict what comes next.
I assign probabilities to past events, to empirically indistinguishable scientific hypotheses, to events that are in principle unobservable for me. Am I just doing it wrong, in your opinion?
That’s what I mean, and that’s what you would need if you think having that kind of a notion of truth is needed for probabilistic reasoning.
What kind of a notion of truth? The kind that requires absolute certainty? But I’m not aware of anyone arguing that one needs that kind of truth for the JTB account, or to make sense of probabilistic reasoning. Why do you think that kind of notion of truth is needed?
I’m not arguing for any kind of notion of truth. I thought the notion of truth that JTB seems to be assuming was confusing as hell, and I wanted clarification of what it was trying to say.
My objection started from here:
2) You’re misunderstanding the purpose of “true” in the JTB definition. It’s not a matter of assigning probability 1 to a proposition, it’s a matter of the proposition actually being true.
Can you get back to that, because I don’t understand you anymore?
OK, I guess we were talking past each other. What is it about that particular claim that you find objectionable? I thought what you were objecting to was the notion that a proposition being true is distinct from it being assigned probability 1, and I was responding to that. But are you objecting to something else?
Is your objection just that you don’t understand what people mean by “true” in the JTB account? I don’t think they’re committed to any particular notion, except for the claim that justification and truth are distinct. A belief can be highly justified and yet false, or not at all justified and yet true. Pretty much any of the theories discussed here would work. My personal preference is deflationism.
ETA: I also posted this at the top of this comment thread, so you can answer there if you wish.
The way I read the quote is:
A proposition being true doesn’t mean that it has a probability of 1. It does, however, mean that if a proposition is assigned a probability of 0.9, and it coincides with what the world is actually like, it is true.
This in turn could be read as:
A proposition being true doesn’t mean that it has a probability of 1. It does, however, mean that if a proposition is assigned a probability of 0.9, and it coincides with what we know about the world with a probability of 1, it is true.
Do you now understand my objection? I predict it’s based on some grave misunderstanding. Thanks for the link, I’ll try to check it out when I have more time.
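In case it helps, here’s a toy simulation (my own construction, with made-up numbers) of how I’m reading “assigned 0.9 and coincides with the world”: a well-calibrated reasoner’s 0.9-assignments come out true about 90% of the time, but assigning 0.9 doesn’t by itself make any particular one of them true:

```python
import random

# Toy calibration check: assign probability 0.9 to many propositions.
# If the assignments are well calibrated, roughly 90% of those propositions
# turn out true -- but any individual one can still be false, which is why
# "assigned 0.9" and "actually true" come apart.

random.seed(0)
assigned_p = 0.9
outcomes = [random.random() < assigned_p for _ in range(10_000)]  # True = proposition holds

print(f"Assigned probability: {assigned_p}")
print(f"Fraction actually true: {sum(outcomes) / len(outcomes):.3f}")
print(f"High-confidence beliefs that turned out false: {outcomes.count(False)}")
```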