Theories of truth are motivated by questions such as:
Why is ‘snow is white’ true?
Correspondence theories of truth generally say something like ‘snow is white’ is true because it maps onto the whiteness of real snow. A general description of such theories is the idea that “truth consists in a relation to reality.”
In your proposal, it appears that reality is defined as an individual’s experience, while the relation is prediction. The sentence ‘snow is white’ is true because that sentence predicts (relation) experience (reality). As such, it would be beneficial to my understanding if you either:
Emphasized that you are proposing a particular correspondence theory of truth, rather than an alternative to correspondence theory, OR
More clearly described why this is not a correspondence theory of truth.
The sentence ‘snow is white’ is true because that sentence predicts (relation) experience (reality).

I’ll give my interpretation, although I don’t know whether Gordon would agree:
What you’re saying here isn’t my read. The sentence “Snow is white” is true to the extent that it guides your anticipations. The sentence doesn’t predict anything on its own. I read it, I interpret it, it guides my attention in a particular way, and when I go look I find that my anticipations match my experience.
This is important for a handful of reasons. Here are a few:
In this theory of truth, things can’t be true or false independent of an experiencer. Sentences can’t be true or false. Equations can’t be true or false. What’s true or false is the interaction between a communication and a being who understands.
This also means that questions can be true or false (or some mix). The fallacy of privileging the hypothesis gestures in this direction.
Things that aren’t clearly statements or even linguistic can be various degrees of true or false. An epistemic hazard can have factually accurate content but be false because of how it divorces my anticipations from reality. A piece of music can inspire an emotional shift that has me relating to my romantic partner differently in ways that just start working better. Etc.
So in some sense, this vision of truth aims less at “Do these symbols point in the correct direction given these formal rules?” and more at “Does this matter?”
I haven’t done anything like a careful analysis, but at a guess, this shift has some promise for unifying the classical split between epistemic and instrumental rationality. Rationality becomes the art of seeking interaction with reality such that your anticipations keep syncing up more and more exactly over time.
“Unifying epistemic and instrumental rationality” doesn’t seem desirable to me — winning and world-mapping are different things. We have to choose between them sometimes, which is messy, but such is the nature of caring about more than one thing in life.
World-mapping is also a different thing from prediction-making, though they’re obviously related in that making your brain resemble the world can make your brain better at predicting future states of the world — just fast forward your ‘map’ and see what it says.
The two can come apart, e.g., if your map is wrong but coincidentally gives you the right answer in some particular case — like a clock that’s broken and always says it’s 10am, but you happen to check it at 10am. Then you’re making an accurate prediction on the basis of something other than having an accurate map underlying that prediction. But this isn’t the sort of thing to shoot for, or try to engineer; merely accurate predictiveness is a diminished version of world-mapping.
All of this is stuff that (in some sense) we know by experience, sure. But the most fundamental and general theory we use to make sense of truth/accuracy/reasoning needn’t be the earliest theory we can epistemically justify, or the most defensible one in the face of Cartesian doubts.
Earliness, foundationalness, and immunity-to-unrealistically-extreme-hypothetical-skepticism are all different things, and in practice the best way to end up with accurate and useful foundations (in my experience) is to ‘build them as you go’ and refine them based on all sorts of contingent and empirical beliefs we acquire, rather than to impose artificial earliness or un-contingent-ness constraints.
Thanks for your reply here, Val! I’ll just add the following:
There’s a somewhat technical argument that predictions are not the kind of thing classically pointed at by a correspondence theory of truth; correspondence theories instead tend to be about setting up a structured relationship between propositions and reality and having some firm ground by which to judge the quality of that relationship. So in that sense subjective probability doesn’t really meet the standard normally expected of a correspondence theory of truth, since such a theory generally requires, explicitly or implicitly, the possibility of a view from nowhere.
That said, it’s a fair point that we’re still talking about how some part of the world relates to another, so it kinda looks like truth as predictive power is a correspondence theory. However, since we’ve cut out metaphysical assumptions, there’s nothing for these predictions (something we experience) to relate to other than more experience, so at best we have things corresponding to themselves, which breaks down the whole idea of how a correspondence theory of truth is supposed to work (there’s some ground or source (the territory) that we can compare against). A predictive theory of truth is predictions all the way down to unjustified hyperpriors.
I don’t get into this above, but this is why I think “truth” in itself is not that interesting; “usefulness to a purpose” is much more in line with how reasoning actually works, and truth is a kind of usefulness to a purpose. My case above is the modest claim that accurate prediction does a relatively good job of describing what people mean when they point at truth, grounded in the most parsimonious story I know how to tell about how we think.
How does subjective probability require the possibility of a view from nowhere?
What happens to the correspondence theory of truth if you find out you’re colorblind?
I think… (correct me if I’m wrong, trying to check myself here as well as responding)
If you thought that “The snow is white” was true, but it turns out that the snow is, in fact, red, then your statement was false.
In the anticipation-prediction model, “The snow is white” appears to mean something more like “I will find ‘The snow is white’ true to my perceptions”, and it is therefore still true.
If you thought that “The snow is white” was true, but it turns out that the snow is, in fact, red, then your statement was false.

The issue is the meaning of the proposition. What do the color terms correspond to?
By asserting that the statement is wrong, you are going with a definition which relies on something that colorblind people can’t see. In their ontology, prior to finding out about colorblindness, the distinction you are making isn’t on the map. Without a way to see the colors in question, then provided the difference is purely ‘no distinction between the two affected colors at all’, knowledge about which is which would have to come from other people. (Though learning to pay attention to other distinguishing features may sometimes help.)
It’s not immediately obvious that being colorblind affects perceptions of snow. (Though it might—colors that otherwise seem similar and blend in with each other can stand out more to people with colorblindness.)
A common version is red-green. (From what I’ve heard, the light that means go in the U.S. looks exactly the same as the light that means stop—by color, but not by position, as long as everything is exactly where it’s supposed to be.)
Your perceptions have no relation to the truth (except where the proposition relates to your perceptions) in the correspondence theory of truth, AIUI. Colorblindness has no relation whatsoever to the truth value of “the snow is white”.
If you had meant to ask the truth value of “The snow looks white to me”, that’s an entirely different story (since the proposition is entirely different).
If we give up any assumption that there’s an external reality and try to reason purely from our experience, then in what sense can there be any difference between “the snow is white” and “the snow looks white to me”? This is, in part, what I’m trying to get at in the post: the map-territory metaphor creates this kind of confusing situation where it looks an awful lot like there’s something like a reality in which, independent of any observer, snow being white could have some meaning, whereas part of the point of the post is that this is nonsense: there must always be some observer, they decide what is white and what is not, and so the truth of snow being white is entirely contingent on the experience of this observer. Since everything we know is parsed through the lens of experience, we have no way to ground truth in anything else, so we cannot preclude the possibility that we only think snow is white because of how our visual system works. In fact, it’s quite likely this is so, and we could easily construct aliens who would either disagree or would at least be unable to make sense of what “snow is white” would mean, since they would lack something like a concept of “white” or “snow” and thus be unable to parse the proposition.
Status: overly long.
the map-territory metaphor creates this kind of confusing situation where it looks an awful lot like there’s something like a reality in which, independent of any observer, snow being white could have some meaning

I think reality exists independently.
However, ‘senses’ may:
Be based on visual processing with a particular set of cones. (A smaller set of cones will, predictably, yield different predictions than a larger set that is the same plus one more.)
Be based on visual processing which can in some way be ‘wrong’ (first it looks one way; without the thing itself changing, more processing occurs and it resolves properly).
Be somewhat subjective. (We look at a rock and see a face. Maybe ‘aliens’ don’t do that. Or maybe they do.)
Since everything we know is parsed through the lens of experience

My point was less about claiming an inability to see beyond that. More: we parse things. Actively. That is a part of how we give them meaning, and after giving them meaning, decide they are true. (The process is a bit more circular than that.)
For example: “This sentence is false.” (It’s nonsense.) “This sentence is not nonsense.” (It’s nonsense. It’s true! Yeah, but it doesn’t mean anything; there’s no correspondence to anything.)
we cannot preclude the possibility that we only think snow is white because of how our visual system works

Yes. Also maybe not.
Yes: it may seem like colors could be a construct to help with stuff like seeing predators, and if there are optical illusions that can fool us, what of it? If the predator in the tree isn’t able to catch and kill us, our visual system is doing spectacularly, even if it’s showing us something that ‘isn’t real’.
Maybe not: Perhaps we can design cameras and measure light. Even if a spectrum of light isn’t captured well by our eyes, we can define a system based around measurements even if our eyes can’t perceive them.
We can sometimes bootstrap an ‘objective’ solution.
But that doesn’t mean we can always pull it off. If a philosopher asks us to define furniture, we may stumble at ‘chair’. You can sit on it. So couches are chairs?
And so philosophical solutions might be devised by coming up with new categories defined by more straightforward properties: sitting-things (including couches, chairs, and comfortable rocks that are good for sitting). But ‘what is a chair’ may prove elusive. ‘What is a game’ may have multiple answers, and people with different tastes may find some games fun and others not, perhaps messing with the idea of the ‘objective game’. And yet, if a certain kind of person does tend to enjoy it, perhaps there is still something there...
(Meant as a metaphor)
When someone asks for a chair, they may have expectations. If they are from far away, perhaps they will be surprised when they see your chairs. Perhaps there are different styles where they come from, or it’s the same styles, just used for different things.
You probably do well enough that an implicit ‘this is a chair’ is never not true. But also, maybe you don’t have a chair, but still find a place they can sit that does just as well.
Maybe people care about purpose more than truth. And both may be context dependent. A sentence can have a different meaning in different contexts.
For a first point, I kind of thought the commenter was asking the question from within a normal theory. If they weren’t, I don’t know what they were asking really, but I guess hopefully someone else will.
For a second point, I’m not sure your theory is meaningfully true. Although there are issues with the fact that you could be a brain in a jar (or whatever), that doesn’t imply there must not be some objective reality somewhere.
Say I have the characters “Hlo elt!” and you have “el,raiy”. Also say that you are so far from me that we will never meet.
There is a meaningful message that can be made from interleaving the two sets (“Hello, reality!”). Despite this, we are so far away that no one can ever know this. Is the combination an objective fact? I would call it one, despite the fact that the system can never see it internally, and only a view from outside the system can.
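As a quick sanity check on the thought experiment, here is a minimal sketch (in Python, purely illustrative; the function name is mine) confirming that the two fragments really do interleave to the intended message:

```python
from itertools import zip_longest

def interleave(a: str, b: str) -> str:
    """Alternate characters from a and b; if one string is longer,
    its leftover characters are appended via the empty fill value."""
    return "".join(x + y for x, y in zip_longest(a, b, fillvalue=""))

# The two far-apart fragments from the example above:
print(interleave("Hlo elt!", "el,raiy"))  # Hello, reality!
```

Neither holder of a fragment can compute this without the other, which is the point: the combined message is a fact about the system visible only from outside it.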
Similarly with truth: agents inside the system can find some properties of my message, like its length (within some margin). They might even be able to look through a dictionary and find some good guesses as to what it might be. I think this shows that an internal representation of an object is not required for the object to exist in a system.
I started replying to the aliens and the snow bit, but I honestly think I was going to stretch the metaphor too far.
Nothing much. A definition of truth doesn’t have to make the truth about everything available to every agent.