The sentence ‘snow is white’ is true because that sentence predicts (relation) experience (reality).
I’ll give my interpretation, although I don’t know whether Gordon would agree:
What you’re saying here isn’t my read. The sentence “Snow is white” is true to the extent that it guides your anticipations. The sentence doesn’t predict anything on its own. I read it, I interpret it, it guides my attention in a particular way, and when I go look I find that my anticipations match my experience.
This is important for a handful of reasons. Here are a few:
In this theory of truth, things can’t be true or false independent of an experiencer. Sentences can’t be true or false. Equations can’t be true or false. What’s true or false is the interaction between a communication and a being who understands.
This also means that questions can be true or false (or some mix). The fallacy of privileging the hypothesis gestures in this direction.
Things that aren’t clearly statements or even linguistic can be various degrees of true or false. An epistemic hazard can have factually accurate content but be false because of how it divorces my anticipations from reality. A piece of music can inspire an emotional shift that has me relating to my romantic partner differently in ways that just start working better. Etc.
So in some sense, this vision of truth aims less at “Do these symbols point in the correct direction given these formal rules?” and more at “Does this matter?”
I haven’t done anything like a careful analysis, but at a guess, this shift has some promise for unifying the classical split between epistemic and instrumental rationality. Rationality becomes the art of seeking interaction with reality such that your anticipations keep syncing up more and more exactly over time.
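To make the “syncing up” image concrete, here’s a toy sketch (purely my illustration; the Bayesian coin model and the numbers are assumptions, not anything from the post). An agent’s anticipation starts at an arbitrary prior and, through repeated contact with reality, matches experience more and more exactly:

```python
# Toy illustration: an agent's anticipation of "heads" starts at an
# arbitrary prior and keeps syncing up with experience as it interacts
# with reality (here, a biased coin it initially knows nothing about).
import random

random.seed(0)
true_heads_rate = 0.7    # reality, hidden from the agent
alpha, beta = 1.0, 1.0   # uniform Beta prior: the starting anticipation

for flip in range(1, 1001):
    heads = random.random() < true_heads_rate  # a new experience
    if heads:
        alpha += 1.0
    else:
        beta += 1.0
    if flip in (1, 10, 100, 1000):
        anticipation = alpha / (alpha + beta)  # posterior mean P(heads)
        print(f"after {flip:4d} flips, anticipated P(heads) = {anticipation:.3f}")
```

The point isn’t the particular Bayesian machinery; it’s that “true” here just means the anticipation increasingly tracks what actually shows up.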
“Unifying epistemic and instrumental rationality” doesn’t seem desirable to me — winning and world-mapping are different things. We have to choose between them sometimes, which is messy, but such is the nature of caring about more than one thing in life.
World-mapping is also a different thing from prediction-making, though they’re obviously related in that making your brain resemble the world can make your brain better at predicting future states of the world — just fast-forward your ‘map’ and see what it says.
The two can come apart, e.g., if your map is wrong but coincidentally gives you the right answer in some particular case — like a clock that’s broken and always says it’s 10am, but you happen to check it at 10am. Then you’re making an accurate prediction on the basis of something other than having an accurate map underlying that prediction. But this isn’t the sort of thing to shoot for, or try to engineer; merely accurate predictiveness is a diminished version of world-mapping.
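Here’s a toy sketch of both points (the code is my own illustration, not something from the thread): a correct map can be fast-forwarded to yield predictions, while a broken clock can be right at one moment by coincidence, with no map behind it at all.

```python
# My illustration: "fast-forwarding" an accurate map yields reliable
# predictions; a broken clock gets one query right by coincidence.

def map_based_clock(hour_at_start: int, hours_elapsed: int) -> int:
    """Fast-forward an accurate map of the time to predict the current hour."""
    return (hour_at_start + hours_elapsed) % 24

def broken_clock(_hour_at_start: int, _hours_elapsed: int) -> int:
    """Ignores the world entirely; always says 10."""
    return 10

start = 8
for elapsed in (2, 3, 7):  # the true hour becomes 10, 11, 15
    true_hour = (start + elapsed) % 24
    print(true_hour,
          map_based_clock(start, elapsed) == true_hour,  # right every time
          broken_clock(start, elapsed) == true_hour)     # right only at 10
```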
All of this is stuff that (in some sense) we know by experience, sure. But the most fundamental and general theory we use to make sense of truth/accuracy/reasoning needn’t be the earliest theory we can epistemically justify, or the most defensible one in the face of Cartesian doubts.
Earliness, foundationalness, and immunity-to-unrealistically-extreme-hypothetical-skepticism are all different things, and in practice the best way to end up with accurate and useful foundations (in my experience) is to ‘build them as you go’ and refine them based on all sorts of contingent and empirical beliefs we acquire, rather than to impose artificial earliness or un-contingent-ness constraints.
Thanks for your reply here, Val! I’ll just add the following:
There’s a somewhat technical argument that predictions are not the kind of thing classically pointed at by a correspondence theory of truth, which instead tends to be about setting up a structured relationship between propositions and reality, with some firm ground by which to judge the quality of that relationship. So in that sense subjective probability doesn’t really meet the standard normally expected of a correspondence theory of truth, since such a theory generally requires, explicitly or implicitly, the possibility of a view from nowhere.
That said, it’s a fair point that we’re still talking about how some part of the world relates to another, so truth as predictive power kinda looks like a correspondence theory. However, since we’ve cut out metaphysical assumptions, there’s nothing for these predictions (something we experience) to relate to other than more experience, so at best we have things corresponding to themselves. That breaks down the whole idea of how a correspondence theory of truth is supposed to work: that there’s some ground or source (the territory) we can compare against. A predictive theory of truth is predictions all the way down to unjustified hyperpriors.
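To gesture at what “checking” even looks like on this picture, here’s a minimal sketch, assuming a log scoring rule (the choice of rule is mine, purely for illustration): a subjective probability gets judged only against further experience, never against the territory directly.

```python
# Minimal sketch (my choice of scoring rule): a subjective probability is
# judged only against what is later experienced, never against a "view
# from nowhere". The prior it descends from is simply taken as given.
import math

def log_score(predicted_p: float, outcome: bool) -> float:
    """Closer to 0 is better; rewards anticipations that match experience."""
    return math.log(predicted_p if outcome else 1.0 - predicted_p)

experienced_outcome = True  # what actually showed up
for p in (0.9, 0.5, 0.1):
    print(f"anticipated P = {p}: log score = {log_score(p, experienced_outcome):.3f}")
```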
I don’t get into this above, but this is why I think “truth” in itself is not that interesting; “usefulness to a purpose” is much more in line with how reasoning actually works, and truth is a kind of usefulness to a purpose. My case above is the small claim that accurate prediction does a relatively good job of describing what people mean when they point at truth, in a way that’s grounded in the most parsimonious story I know how to tell about how we think.
How does subjective probability require the possibility of a view from nowhere?