What do you mean when you say that the proposition “coincides with” what we know about the world?
That would be incoherent.
It’s not a matter of assigning probability 1 to a proposition, it’s a matter of the proposition actually being true.
That’s what it’s trying to be. Could you provide an example of how you would express the exact same thought with different words? I’d like to know if I’m attacking a strawman here.
If our p = 0.9 proposition coincides with what the world is actually like, then we must assume someone has a 100% accurate model of what the world is actually like to make that claim. Otherwise we’re just playing tricks with our imaginations. As I tried to express before, I can imagine a true territory out there, but since nobody can verify it being there, i.e. have a perfect map, it’s a pointless concept for the purposes we’re discussing here.
I’m trying to convey why a particular notion of truth is incoherent, but I’m not sure we agree about that yet.
Would the model still be 100% accurate if there were a label on P saying “only 90% certain”?
Why don’t you read the paper and try that fit yourself, and then ask yourself: is this really what they intend?
I’ve read Gettier’s famous paper, a long time ago, and he doesn’t discuss models or probabilities.
Do you think it can be understood in a probabilistic framework, or will that just yield nonsense?
I’ve seen science types try to reinterpret mainstream philosophy in terms of probability and information several times, and it tends to go nowhere. Why not understand philosophy in its own terms?
Often, the inability to state something in a mathematically precise way is an indication that the underlying idea is not precisely defined. This isn’t universally true, but it is a useful heuristic.
Hardly anything is mathematically precise. It’s not new that philosophy isn’t either.
Sure, but asking “can we take this idea and state it in terms of math?” is a useful question. Moreover, for those aspects of philosophy where one can do so, this often results in it becoming much clearer what is going on. The raven problem is a good example of this: it is a problem that really is difficult to follow, but when one states what is happening in terms of probability, the “paradox” quickly goes away. And this is true not just in philosophy but in many areas of interest. In fact, one problem philosophy has (and part of why it has such a bad reputation) is that once an area is sufficiently precisely defined, which often takes math, it becomes its own field. Math itself broke off from philosophy very early on, and physics also pretty early, but more recent breakoffs were linguistics, economics, and psychology.
One way of thinking about the goals of philosophy is to define things precisely enough that people stop calling that thing philosophy. And one of the most effective ways, historically, to do so is using mathematical tools to help.
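As a side note, the raven case mentioned above can be made concrete with a toy Bayesian model; the sampling setup and all numbers below are illustrative assumptions, not anything from the thread. The point is that sampling a non-black thing and finding it to be a non-raven really does support “all ravens are black”, but only microscopically once non-black things are numerous:

```python
# Toy model of the raven paradox (illustrative assumptions throughout).
# H:  "all ravens are black"
# H1: "exactly one raven is non-black"
# We draw one object uniformly from the NON-BLACK objects and observe
# that it is not a raven (a green apple, say).

def update(prior_h, n_nonblack_nonravens):
    m = n_nonblack_nonravens
    like_h = 1.0            # under H, every non-black object is a non-raven
    like_h1 = m / (m + 1)   # under H1, one of the m + 1 non-black objects is a raven
    numer = prior_h * like_h
    return numer / (numer + (1.0 - prior_h) * like_h1)

print(update(0.5, 10))       # ~0.524: with few non-black things, real support
print(update(0.5, 10**12))   # ~0.5: at realistic scales the support is negligible
```

On this reading the “paradox” dissolves: the apple is evidence for the hypothesis, just astronomically weak evidence.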
“It can’t be stated in terms of maths, so throw it out” is not useful.
Sure, but “It can’t be stated in a mathematical framework that already does a good job of answering a lot of these questions, maybe we should try to adopt it so it can be, or maybe we should conclude that the idea really is confused if we have other information indicating it has problems, or maybe we should wait until experts have hashed out a bit more exactly what they mean and come back to the idea then” are not the same thing as just throwing an idea out because it isn’t mathematically precise.
I think in general that LW should pay more attention to mainstream philosophy. I find it interesting how often people on LW don’t realize how much the standard positions here overlap with Quine’s, and he’s clearly mainstream. It is possible that people on LW overestimate the usefulness of the “can this be mathematicized?” question, but that doesn’t stop it from being a very useful question to ask.
Well, I’d argue that in essence, all of the alternative scenarios you list for dealing with non-mathematicized problems constitute throwing an idea out, insofar as they represent a reshaping of the question by people who didn’t initially propose it, i.e., a type of misrepresentation. The exception is the last one (“maybe we should wait until experts have hashed out a bit more exactly what they mean and come back to the idea then”), which is an adequate way to deal with such problems.
Seems to me it’s not pointless, because your failure to understand it is clearly holding you back...
Why are you failing to distinguish between “P” and “a person claiming P”? They are distinct things. Snow being white has nothing to do with who or what thinks snow is white. And there’s no reason anyone needs a “perfect map” to talk about truth any more than a perfect map is needed to talk about snow being white.
Quoting Chris:
How would you interpret “actually being true” here? Say you have evidence for a proposition that makes it 0.9 probable. How would you establish that the proposition is also true? (Understand that I’m not saying you should.)
If you have evidence that makes P 90% probable, then your evidence has established a 90% chance of P being true (which is to say, you are uncertain whether P is true or not, but you assign 90% of your probability mass to “P is true”, and 10% to “P is false”). The definition of “truth” that makes this work is very simple: let “P” and “P is true” be synonymous.
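The 90% claim above can be sketched as a simple odds calculation; the even prior and 9:1 likelihood ratio are illustrative assumptions chosen to land on 0.9:

```python
# Minimal Bayesian-update sketch (illustrative numbers): evidence with a
# 9:1 likelihood ratio turns even odds on P into 90% probability of P.

def bayes_update(prior, likelihood_ratio):
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

p = bayes_update(0.5, 9.0)
print(p)        # 0.9 of the probability mass goes to "P is true"
print(1.0 - p)  # ~0.1 remains on "P is false"
```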
I agree with you here completely. I was just wondering if particular philosophers had something more nonsensical in mind.
Perhaps. For the purposes of ‘knowledge’, whether or not you actually have knowledge of X depends on whether or not X is true, so knowledge is dependent on more than just your state of mind.
Someone upthread asked how you can “possibly have” the information that X is true, and in a sense you can’t, you can only get more certain of it.
Did any of that help?
I think that someone was me :)
How confident was that “perhaps”? Manfred seemed to agree with me that something fishy is going on. Pragmatist then steelmanned the JTB position by approaching it probabilistically.
I’m not interested in steelmanning these philosophers, I’m interested in what they actually think. Isn’t that the point of this series?
The ‘perhaps’ was more about whether you’d find it nonsensical or not. Some people do, some don’t. (For once, we actually have some related data about this, because knowledge has been a favorite subject of experimental philosophers. I’d have to look up some more studies/an analysis to be sure, but IIRC subjects were much more likely than philosophers to accept the Gettier counterexamples as legitimate knowledge.)
True belief is so easily obtained that you can arrive at it by lucky guesses. Justification is difficult. Certain justification—certainty is about justification, not accuracy—is harder still, and may be impossible. Whether you can have information that X is true depends on whether “information” means belief, justification, knowledge or something else. Skeptics about knowledge tend to see truth as perfect justification. Non-skeptics tend to see truth as an out-of-the-mind correspondence with the world.
Certainty is usually not considered necessary for justification. A very few people do consider it necessary, but there are plenty of skeptics making the stronger claim that we don’t have significant justification, not simply that we don’t have certainty.
Please expand. Give us an example.
Half the people in a room believe, for no particular reason, that extraterrestrial life exists. The other half disbelieve it. Some of them will be right, but none of them know, because they have no systematic justification for their beliefs.
In your opinion, does this apply even if people never encounter extraterrestrial life and have no evidence for it, if there happens to be extraterrestrial life?
Does the above question make sense to you? It doesn’t make sense to me.
That is the realist (and, I think, common sense) attitude: that beliefs are rendered true by correspondence to chunks of reality.
Yes. I don’t assume truth has to be in the head.
If science is falsifiable, and therefore uncertain, is any of it true? If not, then I assume JTB must judge “scientific knowledge” to be an oxymoron.
If some scientific knowledge is true does that mean that the theory will not be revised, extended or corrected in the next 1,000 years?
Does truth apply to science? If not, should “true” be included in our definition of knowledge?
The JTB account per se does not say justification must be certain.
Interpreting the meaning of “is true” and establishing that something “is true” are two different things—namely, semantics and epistemology. It’s common in science to sidestep semantic questions with operational answers, but that doesn’t necessarily work in other areas.
Can you give more examples of such sidestepping where it doesn’t work?
It’s more a case of noting that there is no reason for it to work everywhere, and no evidence that it works outside of special cases.
I’m not; I know they’re distinct things. It seems to me you misunderstood me. What’s with the tone?
I know that.
So if you agree about that, why are you saying things like
How is the “if” connected to the “then” of that sentence? Your thinking isn’t making any sense to me.
That quote shouldn’t make sense to you, and it’s not my thinking. Keep in mind I’m not endorsing a notion of truth here, I’m questioning it.
White and snow wouldn’t exist without someone thinking about them, so I’m not sure what you’re trying to say here.
What goes on in mountains when no-one is thinking about them...?
I actually had this particular failure mode in mind when I was responding to you. But let’s not go there; it’s not important.