You come to understand why you asked about the phenomenon you had in mind when you posed the question, and that understanding “unasks” the question.
But can’t that knowledge be expressed as a truth in some language, even if not the one I used when I first asked the question? To put it another way, if I’m to be given confusion-extinguishing answers, I still want them to be true answers, because surely there are false answers that would also extinguish my confusion (since I’m human and flawed).
I’m worried about prematurely identifying the thing we want with heuristics for obtaining that thing. I think we’re tempted to do this when we want to express clearly what we want but don’t yet understand it, while we do understand the heuristics.
Do you understand my worry, and if so, do you think it applies here?
I think I understand your worry: you think there’s a truth thing separate from the heuristic I gave, and that the latter is just a loose approximation that we should not use as a replacement for the former.
I differ in that I think it’s the reverse: truth always “cashes out” as a useful self-to-reality model, and this becomes clearer as your model gets more accurate. Rather than just a heuristic, it is ultimately what you want when you say you are seeking the truth. And any judgment that you have reached the truth will fall back on the question of whether you have a useful self-to-reality model.
To put it another way: what if the model you were given performs perfectly? Would you still worry, “Okay, sure, this accurately captures the dynamics of all phenomena I am capable of observing … but what if it’s just tricking me? Maybe none of this is really true”? I would say at that point you have your priorities reversed: if something fails at being “truth” but can perform that well, this “non-truth” is no longer something you should care about.
Thanks, that’s actually much clearer to me.