No, you run the sentence “2+2=4” through a processor and note that the resulting evaluation is “correct”.
You then predict that you will produce the same evaluation again in the future, and that others will get the same result when they try it.
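Purely to make the metaphor concrete, here is a minimal Python sketch of such a “processor” (the evaluator and the name `evaluate_claim` are hypothetical, invented for illustration): a reliable processor returns the same verdict on every run, which is what licenses the prediction that you, and others, will reproduce the evaluation.

```python
# A toy "processor" for arithmetic sentences; `evaluate_claim` is a
# hypothetical name invented for this sketch, not from the thread.

def evaluate_claim(claim: str) -> str:
    """Evaluate an equality like '2+2=4' and return a verdict."""
    lhs, rhs = claim.split("=")
    # eval() is tolerable here because we only feed it literal arithmetic.
    return "correct" if eval(lhs) == int(rhs) else "false"

# A reliable processor yields a single verdict no matter how often it
# runs; the follow-up question below asks what to do when it doesn't.
verdicts = {evaluate_claim("2+2=4") for _ in range(1000)}
assert verdicts == {"correct"}
```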
What would you do if your processor returned “correct” some of the time and “false” the rest?
People reading this thread should take a look at Greg Egan’s short story “Luminous”.
I thought that being rational, whatever that means on Less Wrong, meant departing from extreme epistemological nihilism. Suppose I were to meet you in nihilism land: how could we tell the difference between belief in the Flying Spaghetti Monster and belief that 2+2 could be 3 or 4? That’s why the nihilistic position is “silly”, while Jack, out there in making-sense-land, says that he knows 2+2=4.
I’m sorry, but I don’t think that you understand.
The fact that you reach a conclusion does not force the universe to conform to it. It may be perfectly obvious to you that 2+2=4, just as it was perfectly obvious to the Greeks that Euclid’s fifth postulate was true, or to Newton that time was universal. Nevertheless, they were wrong, as you may be wrong.
Belief is not conviction. Conviction is not knowledge. Knowledge is not truth.
That seems at best trivially true, in that “knowledge is truth” commits a category mistake. A common-enough epistemological position is that knowledge is about true propositions.
More specifically, many in epistemology define knowledge as “justified true belief”. On this view, if “S knows X” is true, then X is true.
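To see why factivity falls directly out of that definition, here is a minimal Lean 4 sketch of the JTB analysis (the `Agent` type and the `Believes` and `Justified` predicates are hypothetical placeholders, not any standard formalization):

```lean
-- A toy formalization of knowledge as justified true belief.
-- `Agent`, `Believes`, and `Justified` are placeholder assumptions.
def Knows {Agent : Type} (Believes Justified : Agent → Prop → Prop)
    (S : Agent) (X : Prop) : Prop :=
  X ∧ Believes S X ∧ Justified S X

-- Factivity is immediate: if "S knows X" holds, then X is true,
-- because truth is the first conjunct of the definition.
theorem knows_implies_true {Agent : Type}
    (Believes Justified : Agent → Prop → Prop) (S : Agent) (X : Prop)
    (h : Knows Believes Justified S X) : X :=
  h.1
```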
Nobody defines knowledge simply as justified true belief anymore. Everybody needs a workaround for Gettier cases.
Well, while I agree with your sentiment, surely your statement is technically false. Indeed, one way to get around Gettier cases is to simply make “justified” a more difficult credential to obtain (not that I think that’s a good solution).
Also, many philosophers no longer think definitions need to specify necessary and sufficient conditions, and so would happily claim ‘justified true belief’ is a ‘good enough’ definition of knowledge.
I’m probably overgeneralizing from the professors I’ve had. Your point is well taken.
Do you believe in truth, in general, at all? If so, how do humans experience truth?