True Answers for Every Philosophical Question
I don’t want true answers to those questions; I want confusion-extinguishing ones.
Are you saying there is no such thing as true and false in philosophy (only confusing and confusion-extinguishing), or that given the choice between a true but confusing answer and a false but confusion-extinguishing answer, you’d choose the latter?
Maybe I started sounding a little thick-headed to you, as I have in the past, so let me try to rephrase my criticism more substantively.
For the class of questions you’re referring to, I believe that as you gain more and more knowledge, and are able to better refine what you’re asking for in light of what you (and future self-modifications) want, it will turn out that the thing you’re actually looking for is better described as “confusion extinguishment” rather than “truth”.
This is because, at a universal-enough level of knowledge, “truth” becomes ill-defined, and what you really want is an understandable mapping from yourself to reality. In our current state, with a specific ontology and language assumed, we can take an arbitrary utterance and classify it as true or false (edit: or unknown or meaningless). But as that ontology adjusts to account for new knowledge, there is no natural grounding from which to judge statements, and so you “cut out the middle” and search directly for the mapping from an encoding to useful predictions about reality, in which the encoding is only true or false relative to a model (or “decompressor”).
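To make the “decompressor” point concrete, here is a minimal sketch (the decoder names and the clock example are invented purely for illustration): the same encoded message yields different predictions under different models, so it can only be scored against reality once a decoder is fixed.

```python
# Minimal sketch: an encoding is only "true" or "false" relative to a decompressor.
# The decoder names and the clock example below are invented for illustration.

message = "1"  # some compressed claim about the world

def decode_under_model_a(bits: str) -> str:
    # Under one model the message decodes to one prediction...
    return "the two clocks will stay synchronized"

def decode_under_model_b(bits: str) -> str:
    # ...under a different model the same message decodes to another.
    return "the moving clock will lag behind"

observation = "the moving clock will lag behind"  # what reality actually returns

for decoder in (decode_under_model_a, decode_under_model_b):
    prediction = decoder(message)
    # Only the pair (message, decoder) can be checked against reality;
    # the bare message has no truth value of its own.
    print(decoder.__name__, "->", prediction == observation)
```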
(Similarly, whether I’m lying to you depends on whether you are aware of the encoding I’m using, and whether I’m aware of that awareness. If the truth is “yes”, but you already know I’ll say “no” when I mean “yes”, it is not lying for me to say “no”. Conversely, it is lying if I predicate my answer on a coin flip (when you’re not asking about a coin flip), even if the coin flip happens to give the correct answer. Entanglement, not truth, is the key concept here.)
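One way to cash out “entanglement, not truth” is as mutual information between the answer and the fact being asked about. A rough sketch under toy assumptions (a fair binary unknown, and the listener knowing the speaker’s encoding): the inverted code carries a full bit even though the literal answer is always false, while the coin flip carries zero bits even though it matches the truth half the time.

```python
# Toy sketch: "entanglement, not truth" as mutual information (in bits).
# Assumptions: the fact is a fair binary unknown; you know my encoding.
import itertools
from math import log2

def mutual_information(joint):
    """I(truth; answer) in bits for a dict {(truth, answer): probability}."""
    p_truth, p_answer = {}, {}
    for (t, a), p in joint.items():
        p_truth[t] = p_truth.get(t, 0.0) + p
        p_answer[a] = p_answer.get(a, 0.0) + p
    return sum(p * log2(p / (p_truth[t] * p_answer[a]))
               for (t, a), p in joint.items() if p > 0)

# Inverted code: I say "no" whenever the truth is "yes", and you know it.
# The literal answer is always false, yet it conveys a full bit.
inverted_code = {("yes", "no"): 0.5, ("no", "yes"): 0.5}

# Coin flip: my answer ignores the truth entirely.
# Half the time it happens to match, yet it conveys zero bits.
coin_flip = {(t, a): 0.25 for t, a in itertools.product(["yes", "no"], repeat=2)}

print(mutual_information(inverted_code))  # 1.0
print(mutual_information(coin_flip))      # 0.0
```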
Therefore, in the limit of infinite knowledge, the goal you will be seeking will look more like “confusion extinguishment” than “truth”.
Rather than saying “truth” becomes ill-defined, I would say that the problem is simply that an answer of the form “true” or “false” will typically convey fewer bits of information than an answer that would be described as “confusion-extinguishing”; the latter usually involves carving up your hypothesis-space more finely and directing your probability-flow more efficiently toward smaller regions of the space.
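As a toy illustration of the bit-counting point (the hypothesis counts are invented for the example): a bare true/false verdict can shift at most one bit of uncertainty, while an answer that carves the hypothesis space more finely can shift many more.

```python
# Toy illustration: entropy reduction from a verdict vs. a finer-grained answer.
# The hypothesis counts are invented purely for the example.
from math import log2

n_hypotheses = 1024  # uniform prior over candidate explanations

# A bare "true"/"false" verdict at best halves the space: at most 1 bit.
bits_from_verdict = log2(n_hypotheses) - log2(n_hypotheses / 2)

# A confusion-extinguishing answer that leaves only 4 live hypotheses
# conveys considerably more.
bits_from_explanation = log2(n_hypotheses) - log2(4)

print(bits_from_verdict)      # 1.0
print(bits_from_explanation)  # 8.0
```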
Fair enough: I think it can be rephrased as a problem of the declining helpfulness of “true/false” answers as your knowledge expands and becomes better grounded.
I’m afraid there’s too big an inferential gap between us, and I’m not getting much out of your comment. As an example of one confusion I have, when you say:
This is because, at a universal-enough level of knowledge, “truth” becomes ill-defined
you seem to be assuming a specific theory of truth, which I’m not familiar with. Perhaps you can refer me to it, or consider expanding your comment into a post?
I thought I just explained it in the same paragraph and in the parenthetical. Did you read those? If so, which claim do you find implausible or irrelevant to the issue?
The purpose of my remarks following the part you quoted was to clarify what I meant, so I’m not sure what to do when you cut that explanation off and plead incomprehension.
I’ll say it one more time in a different way: you make certain assumptions, both in the background and in your language, when you claim that “100 angels can dance on the head of a pin”. As those assumptions turn out to be false, they lose their importance, and you are forced to ask a different question with different assumptions, until you’re no longer answering anything like “Do humans have free will?”, or anything about angels: both your terms and your criteria for deciding when you have an acceptable answer have changed so as to render the original question irrelevant and meaningless.
(Edit: So once you’ve learned enough, you no longer care if “Do humans have free will?” is “true”, or even what such a thing means. You know why you asked about the phenomenon you had in mind with the question, thus “unasking” the question.)
I looked at the list of theories of truth you linked, and they don’t seem to address (or be robust against) the kind of situation we’re talking about here, in which the very assumptions behind claims are undergoing rapid change, necessitating changes to the language in which you express those claims. The pragmatic theory (#2) sounds closest to the standard by which I judge answers to philosophical questions, though.
Thanks, that’s actually much clearer to me.
You know why you asked about the phenomenon you had in mind with the question, thus “unasking” the question.
But can’t that knowledge be expressed as a truth in some language, even if not the one that I used when I first asked the question? To put it another way, if I’m to be given confusion-extinguishing answers, I still want them to be true answers, because surely there are false answers that will also extinguish my confusion (since I’m human and flawed).
I’m worried about prematurely identifying the thing we want with heuristics for obtaining that thing. I think we are tempted to do this when we want to clearly express what we want, and we don’t understand it, but we do understand the heuristics.
Do you understand my worry, and if so, do you think it applies here?
I think I understand your worry: you think there’s a truth thing separate from the heuristic I gave, and that the latter is just a loose approximation that we should not use as a replacement for the former.
I differ in that I think it’s the reverse: truth always “cashes out” as a useful self-to-reality model, and this becomes clearer as your model gets more accurate. Rather than just a heuristic, it is ultimately what you want when you say you are seeking the truth. And any judgment that you have reached the truth will fall back on the question of whether you have a useful self-to-reality model.
To put it another way, what if the model you were given performs perfectly? Would you have any worry along the lines of: “Okay, sure, this is able to accurately capture the dynamics of all phenomena I am capable of observing … but what if it’s just tricking me? This might not all be really true.” I would say that at that point you have your priorities reversed: if something fails at being “truth” but can perform that well, this “non-truth” is no longer something you should care about.
I’m saying that the “confusion-extinguishing” heuristic is a better one for identifying good answers to philosophical questions, as judged by me, and probably as judged by you as well.
Also that, given the subject matter, truth may be undecidable for some questions (owing to the process by which philosophers arrived at them), in which case you’d want the confusion-extinguishing answer anyway.
“confusion-extinguishing” heuristic is a better one
Better than what? Better than “it seems true to me”? But I didn’t ask for “Answers That Seem True”.
“Confusion-extinguishing” may be the best heuristic I have now for arriving at the truth, but if someone else has come up with better heuristics, I want them to write about the answers they arrived at using those heuristics. I think I was right to identify what I actually want, which is truth, and not answers satisfying a particular heuristic.
Do you want to know whether “100 angels can dance on the head of a pin” is true, or do you want the confusion that generated that question to be extinguished?
(It’s true, by the way.)
Do you think this is possible right now? Would this be a joke post that you want to read, or something?
I hope it isn’t a joke. I can see great use for a deconstruction of the many philosophical questions and failed philosophies, and, most importantly, for some kind of status report on more modern thought.
We’ve all heard of Hume, Kant, and Descartes, to name a few. But their ideas were formed long before the Scientific Revolution, which I arbitrarily date to the publication of On the Origin of Species. It would be nice to point people arguing old-school deontology, for example, to Wei Dai’s chapter: True Answers About Why Good Will Alone Is Insufficient.
In some ways I like this idea, but in some ways I don’t think it would work. Suppose, for example, that I produce a post entitled “The real reason why philosophical realism sucks”. The post consists of 20 lines or so of aphorisms, each a link to a more complete philosophical argument. Cool, potentially informative, and very likely useful as a reference. But how would you discuss a posting like that in the comments?
Suppose acting out of concern for the morality of my future selves was moral.
For a reductio, assume that a moral motive is sufficient for moral action. Suppose you self-modified into a paperclipper who believed it was moral to make paperclips. Now, post-modification, you could be moral by making paperclips. Recognising this, your motive in self-modifying is to help your future self act morally. Hence, by our Kantian assumption, the self-modification was moral. Hence it is moral to become a paperclipper!
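The skeleton of the reductio is a single application of the assumed premise. A minimal formal sketch (the predicate and action names are invented for illustration, and the “moral motive” premise is simply taken as given):

```lean
-- Minimal sketch of the reductio; all names are invented for illustration.
variable (Action : Type)
variable (MoralMotive MoralAction : Action → Prop)
variable (becomeAPaperclipper : Action)

-- `kant`: the assumption (for reductio) that a moral motive suffices for moral action.
-- `premise`: the self-modification is done out of concern for the morality
-- of one's future selves, i.e. from a moral motive.
example
    (kant : ∀ a, MoralMotive a → MoralAction a)
    (premise : MoralMotive becomeAPaperclipper) :
    MoralAction becomeAPaperclipper :=
  kant becomeAPaperclipper premise
```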
Full content of the actual post:
“I’m not sure.”
How to Build a Friendly AI (with Source Code Examples)