3C1: The correspondence relation must be some sort of resemblance relation. But truthbearers do not resemble anything in the world except other truthbearers—echoing Berkeley’s “an idea can be like nothing but an idea”.
This is actually a very easy one to respond to. Truthbearers do resemble non-truthbearers. What must ultimately be truth-bearing, if anything really is, is some component of the world—a brain-state, an utterance, or what-have-you. These truth-bearing parts of the world can resemble their referents, in the sense that a relatively simple and systematic transformation on one would yield some of the properties of the other. For instance, a literal map clearly resembles its territory; eliminating most of the territory’s properties, and transforming the ones that remain in a principled way, could produce the map. But sentences also resemble the territories they describe, e.g., through temporal and spatial correlation. Even Berkeley’s argument clearly fails for this reason; an immaterial idea can systematically share properties with a non-idea, if only temporal ones.
Eliezer thinks there is an arrow leading to “Snow is white” from the fact that snow is white.
Language use is a natural phenomenon. Hence, reference is also a natural phenomenon, and one we should try to explain as part of our project of accounting for the patterns of human behavior. Here, we’re trying to understand why humans assert “Snow is white” in the particular patterns they do, and why they assign truth-values to that sentence in the patterns they do. The simplest adequate hypothesis will note that usage of “snow” correlates with brain-states that in turn resemble (heavily transformed) snow, and that “white” correlates with brain-states resembling transformed white light, and that “Snow is white” expresses a relationship between these two phenomena such that white light is reflected off of snow. When normal English language users think white light reflects off of snow, they call the sentence “snow is white” true; and when they think the opposite, they call “snow is white” false. So, there is a physical relationship between the linguistic behavior of this community and the apparent properties of snow.
Most people are moral realists, so if your theory of truth is inconsistent with moral realism, they will take that as evidence that your theory of truth is not correct.
Yes, but is our goal to convince everyone that we’re correct, or to be correct? The unpopularity of moral anti-realism counts against the rhetorical persuasiveness of a correspondence theory combined with a conventional scientific world-view. But it will only count against the plausibility of this conjunction if we have reason to think that moral statements are true in the same basic way that statements about the whiteness of snow are true.
one we should try to explain as part of our project of accounting for the patterns of human behavior.
In brief, I disagree that we are trying to explain human behavior. We are trying to develop an agent-universal explanation of truth. The risk of focusing on human behavior (or human brain states) is that the theory of truth won’t generalize to non-human agents.
Regarding moral facts, I agree that our goal is true philosophy, not comforting philosophy. I’m a moral anti-realist independent of theory-of-truth considerations. But most people seem to feel that their moral senses are facts (yes, I’m well aware of the irony of appealing to universal intuitions in a post that urges rejection of appeals to universal intuitions).
The widespread nature of belief in values-as-truths cries out for explanation, and the only family of theories I’m aware of that even tries to provide such an explanation is wildly controversial and unpopular in the scientific community.
We are trying to develop an agent-universal explanation of truth. The risk of focusing on human behavior (or human brain states) is that the theory of truth won’t generalize to non-human agents.
I’m not sure ‘agent’ is a natural kind. ‘Truth’ may not be a natural kind either; it may be a very gerrymandered, odd-looking collection of properties. So I spoke in terms of concrete human behaviors in order to maintain agnosticism about how generalizable these properties are. If they do turn out to be generalizable, then great. I don’t think any part of my account precludes that possibility.
The widespread nature of belief in values-as-truths cries out for explanation
Yes. My explanation is that our mental models do treat values as though they were real properties of things. Similarly, our mental models treat chairs as discrete solid objects, treat mathematical objects as mind-independent reals, treat animals as having desires and purposes, and treat possibility and necessity as worldly facts. In all of these cases, our evidence for the metaphysical category actually occurring is much weaker than our apparent confidence in the category’s reality. So the problem is very general; it seems that most of our beliefs are predicated on useful fictions (analogous to our willingness to affirm the truth of ‘Sherlock Holmes is a detective, not a carpenter’), in which case we are committed either to an error theory or to revising our standards for what ‘truth’ is.
‘Truth’ may not be a natural kind either; it may be a very gerrymandered, odd-looking collection of properties.
If so, rationalists may as well shut up shop, because anyone would be able to add an interest-specific lump to the gerrymander.
ETA:
So the problem is very general; it seems that most of our beliefs are predicated on useful fictions (analogous to our willingness to affirm the truth of ‘Sherlock Holmes is a detective, not a carpenter’), in which case we are committed either to an error theory or to revising our standards for what ‘truth’ is.
I go for the third option.
If so, rationalists may as well shut up shop, because anyone would be able to add an interest-specific lump to the gerrymander.
People already do that, and yet rationalists see no reason to ‘shut up shop’ as a result. ‘True’ is just a word. Rationality is about systematic optimization for our goals, not about defending our favorite words from the rabble. Sometimes it’s worthwhile to actively criticize a use of ‘truth’; sometimes it’s worthwhile to participate in the gerrymandering ourselves; and sometimes it’s worthwhile just to avoid getting involved in the kerfuffle. For instance, criticizing people for calling ‘Sherlock Holmes is a detective’ true is both less useful and less philosophically interesting than criticizing people for calling ‘there is exactly one empty set’ true.
Also, it’s important to remember that there are two different respects in which ‘truth’ might be gerrymandered. First, it might be gerrymandered for purely social reasons. Second, it might be gerrymandered because it’s a very complicated property of high-level representational systems. One should not expect mental states in general to be simply and nondisjunctively definable in a strictly physical language. Yet if we learned that ‘pain’ were a highly disjunctive property rather than a natural kind, this would give us no reason to stop deeming pain unpleasant.
People already do that, and yet rationalists see no reason to ‘shut up shop’ as a result
People try to do that, but rationalists don’t have to regard it as legitimate, and can object. However, if a notion of truth is adopted that is pluralistic and has no constraint on its pluralism—Anything Goes—rationalists could no longer object to, e.g., Astrological Truth.
‘True’ is just a word.
Rationality is about systematic optimization for our goals, not about defending our favorite words from the rabble.
So you say. Most rationalists are engaged in some sort of wider debate.
sometimes it’s worthwhile to participate in the gerrymandering
Even if it is intellectually dishonest to do so?
First, it might be gerrymandered for purely social reasons. Second, it might be gerrymandered because it’s a very complicated property of high-level representational systems.
I think you may have confused truth with states-of-mind-having-content-about-truth. Electrons are simple; thoughts about them aren’t.
One should not expect mental states in general to be simply and nondisjunctively definable in a strictly physical language. Yet if we learned that ‘pain’ were a highly disjunctive property rather than a natural kind, this would give us no reason to stop deeming pain unpleasant.
Something’s not being a natural kind is not justification for arbitrarily changing its definition. I don’t get to redefine the taste of chocolate as a kind of pain.
No one on this thread, up till now, has mentioned an arbitrarily changing or anything-goes model of truth. Perhaps you misunderstood what I meant by ‘gerrymandered.’ All I meant was that the referent of ‘truth’ in physical or biological terms may be an extremely complicated and ugly array of truth-bearing states. Conceding that doesn’t mean that we should allow ‘truth’ (or any word) to be used completely anarchically.
It might be. Then philosophers would be correct to look for a sense that all those referents have in common.