I’m afraid there’s too big of an inferential gap between us, and I’m not getting much out of your comment. As an example of one confusion I have, when you say:
This is because, at a universal-enough level of knowledge, “truth” becomes ill-defined
you seem to be assuming a specific theory of truth, which I’m not familiar with. Perhaps you can refer me to it, or consider expanding your comment into a post?
I thought I just explained it in the same paragraph and in the parenthetical. Did you read those? If so, which claim do you find implausible or irrelevant to the issue?
The purpose of my remarks following the part you quoted was to clarify what I meant, so I’m not sure what to do when you cut that explanation off and plead incomprehension.
I’ll say it one more time in a different way: You make certain assumptions, both in the background and in your language, when you claim that “100 angels can dance on the head of a pin”. As those assumptions turn out to be false, they lose importance, and you are forced to ask a different question with different assumptions, until you’re no longer answering anything like “Do humans have free will?” or anything about angels: both your terms and your criteria for deciding when you have an acceptable answer have changed so as to render the original question irrelevant and meaningless.
(Edit: So once you’ve learned enough, you no longer care if “Do humans have free will?” is “true”, or even what such a thing means. You know why you asked about the phenomenon you had in mind with the question, thus “unasking” the question.)
I looked at the list of theories of truth you linked, and they don’t seem to address (or be robust against) the kind of situation we’re talking about here, in which the very assumptions behind claims are undergoing rapid change, necessitating changes to the language in which you express those claims. The pragmatic theory (#2) sounds closest to what I’m judging answers to philosophical questions by, though.
Thanks, that’s actually much clearer to me.

You know why you asked about the phenomenon you had in mind with the question, thus “unasking” the question.
But can’t that knowledge be expressed as a truth in some language, even if not the one that I used when I first asked the question? To put it another way, if I’m to be given confusion-extinguishing answers, I still want them to be true answers, because surely there are false answers that will also extinguish my confusion (since I’m human and flawed).
I’m worried about prematurely identifying the thing we want with heuristics for obtaining that thing. I think we are tempted to do this when we want to clearly express what we want, and we don’t understand it, but we do understand the heuristics.
Do you understand my worry, and if so, do you think it applies here?
I think I understand your worry: you think there’s a truth thing separate from the heuristic I gave, and that the latter is just a loose approximation that we should not use as a replacement for the former.
I differ in that I think it’s the reverse: truth always “cashes out” as a useful self-to-reality model, and this becomes clearer as your model gets more accurate. Rather than just a heuristic, it is ultimately what you want when you say you are seeking the truth. And any judgment that you have reached the truth will fall back on the question of whether you have a useful self-to-reality model.
To put it another way, what if the model you were given performs perfectly? Would you have any worry that, “okay, sure, this is able to accurately capture the dynamics of all phenomena I am capable of observing … but what if it’s just tricking me? This might not all be really true.” I would say at that point, you have your priorities reversed: if something fails at being “truth” but can perform that well, this “non-truth” is no longer something you should care about.