Request for Steelman: Non-correspondence concepts of truth
A couple of days ago, Buybuydandavis wrote the following on Less Wrong:
I’m increasingly of the opinion that truth as correspondence to reality is a minority orientation.
I’ve spent a lot of energy over the last couple of days trying to come to terms with the implications of this sentence. While it certainly corresponds with my own observations about many people, the thought that most humans simply reject correspondence to reality as the criterion for truth seems almost too outrageous to take seriously. If upon further reflection I end up truly believing this, it seems that it would be impossible for me to have a discussion about the nature of reality with the great majority of the human race. In other words, if I truly believed this, I would label most people as being too stupid to have a real discussion with.
However, this reaction seems like an instance of a failure mode described by Megan McArdle:
I’m always fascinated by the number of people who proudly build columns, tweets, blog posts or Facebook posts around the same core statement: “I don’t understand how anyone could (oppose legal abortion/support a carbon tax/sympathize with the Palestinians over the Israelis/want to privatize Social Security/insert your pet issue here).” It’s such an interesting statement, because it has three layers of meaning.
The first layer is the literal meaning of the words: I lack the knowledge and understanding to figure this out. But the second, intended meaning is the opposite: I am such a superior moral being that I cannot even imagine the cognitive errors or moral turpitude that could lead someone to such obviously wrong conclusions. And yet, the third, true meaning is actually more like the first: I lack the empathy, moral imagination or analytical skills to attempt even a basic understanding of the people who disagree with me.
In short, “I’m stupid.” Something that few people would ever post so starkly on their Facebook feeds.
With this background, it seems important to improve my model of people who reject correspondence as the criterion for truth. The obvious first place to look is in academic philosophy. The primary challenger to correspondence theory is called “coherence theory”. If I understand correctly, coherence theory says that a statement is true iff it is logically consistent with “some specified set of sentences”.
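As a toy illustration (my own sketch, not from the post), coherence-as-consistency can be made concrete for propositional beliefs: a claim “coheres” with a belief set if some truth assignment satisfies the belief set and the claim together. The belief functions and atom names below are invented for the example.

```python
# Minimal sketch of coherence as logical consistency: a candidate claim is
# "coherence-true" relative to a belief set if some truth assignment
# satisfies the belief set together with the claim.
from itertools import product

def consistent(sentences, atoms):
    """Brute-force: does any truth assignment satisfy all sentences?"""
    for values in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(s(env) for s in sentences):
            return True
    return False

# Belief set: "it rains" and "rain implies wet streets".
beliefs = [lambda e: e["rain"], lambda e: (not e["rain"]) or e["wet"]]
claim_ok = lambda e: e["wet"]          # coheres with the beliefs
claim_bad = lambda e: not e["rain"]    # contradicts them

atoms = ["rain", "wet"]
print(consistent(beliefs + [claim_ok], atoms))   # True
print(consistent(beliefs + [claim_bad], atoms))  # False
```

Note that nothing here consults the world: a belief set about a permanently dry desert would be equally “coherent”, which is exactly the property correspondence theorists object to.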
Coherence is obviously an important concept, which has valuable uses for example in formal systems. It does not capture my idea of what the word “truth” means, but that is purely a semantics issue. I would be willing to cede the word “truth” to the coherence camp if we agreed on a separate word we could use to mean “correspondence to reality”. However, my intuition is that they wouldn’t let us get away with this. I sense that there are people out there who genuinely object to the very idea of discussing whether sentences correspond to reality.
So it seems I have a couple of options:
1. I can look for empirical evidence that buybuydandavis is wrong, i.e., that most people accept correspondence to reality as the criterion for truth.
2. I can try to convince people to use some other word for correspondence to reality, so they have the necessary semantic machinery to have a real discussion about what reality is like.
3. I can accept that most people are unable to have a discussion about the nature of reality.
4. I can attempt to steelman the position that truth is something other than correspondence.
Option 1 appears unlikely to be true. Option 2 seems unlikely to work. Option 3 seems very unattractive, because it would be very uncomfortable to have discussions that on the surface appear to be about the nature of reality, but which really are about something else, where the precise value of “something else” is unknown to me.
I would therefore be very interested in a steelman of non-correspondence concepts of truth. I think it would be important not only for me, but also for the rationalist community as a group, to get a more accurate model of how non-rationalists think about “truth”.
“It would be very uncomfortable to have discussions that on the surface appear to be about the nature of reality, but which really are about something else, where the precise value of ‘something else’ is unknown to me.”
Indeed. I agree, although I find it extremely uncomfortable even when the something else is known to me.
For example, once I had a discussion with someone which seemed to be going pretty well, and which fully appeared to be about the nature of reality, and which we were both enjoying. Then at one point in the discussion I said something like “you know, the reason I thought X was true was because of Y”, where X and Y had some reference to another discussion that we had once held and in which we had disagreed.
The person responded, “now you’re ruining everything!!”
Why was I “ruining everything”? The reason is that I misunderstood the point of the discussion. I thought it was about the nature of reality. But, in fact, they simply intended it as a discussion about the relationship between the two of us, and they understood my reference to a discussion in which we disagreed as something harmful to the relationship.
In the end I have come to the very uncomfortable conclusion that at some level, most conversations are like this, and are not about the nature of reality even when they appear to be, and that most people in fact either never or almost never engage in conversations which are actually about the truth of the matter. And the result is that in most conversations I feel like I am speaking with aliens—although the truth may be that the aliens here are the people like us who are actually concerned with reality, and the others are normal human beings.
There is consequently a problem with your four options, although I would say that the third is basically true. It is not that people think that “truth” means something other than “correspondence with reality.” If you ask them what they mean, they will say it means that, and they will disagree with any other definition. But the very discussion about the meaning of truth is not about the nature of reality, while your attempt to resolve the problem by discussing the nature of truth is meant to be about reality. So when you engage in this discussion you will be at cross purposes, and you will not be able to resolve anything. Nor will you be able to show people that they are unable to have a discussion about the nature of reality; they will be equally and similarly unable to accept that very truth, precisely because they are unable to have a discussion like that.
Basically I think Robin Hanson has it right with his definition of human beings as “homo hypocritus.” In theory people claim to accept the correspondence theory of truth, but it is basically hypocrisy, and precisely for that reason people are unable to have the kind of discussion you want; they will never understand this, nor the reason for it, and you can never explain it to them.
If people couldn’t come to acquire a correspondence theory over time, or come to acquire a ‘sense of reality’ over time, then I wouldn’t have either of those things today, since I didn’t start with them. I can remember relatively clearly what it was like to think of truth-claims primarily and consciously as tools or games, rather than as tokens mapping indifferent, objective states of affairs; and I can remember the feeling of changing my mind about that.
I agree with you that hypocrisy and self-deception are big human problems. Since this is a thread about steel-manning the other side, though, we should keep in mind the (e.g., game-theoretic) advantages to indirect communication. Refusing to develop the knowledge and social skills needed to read into others’ subtext and linguistic goals (based on an ideal of True Rationalists who speak literally, directly, and honestly in all contexts) would be straw Vulcan rationality. (Granting that mainstream society is more in need of honesty and openness, as a rule.)
Assertion-conditions for non-truth-functional things (e.g., ‘happy birthday!’, ‘could you pass the guacamole?’, ‘go away!’, ‘mmm, hot dogs’) can certainly be about the world, particularly if the facts of psychology are included as part of ‘the world.’ It makes sense to despise pointless ambiguity, but the same doesn’t hold for relevantly unambiguous (or for that matter usefully vague) indirect statements. We should also be a lot more careful about assigning the same value to ‘conversations about nothing-whatsoever’ as we do to ‘conversations about the participants’ affect’. I find it disturbing how easily we slide between the concept of ‘reality’ that includes mental states and the concept of ‘reality’ that’s defined in contrast to mental states.
I agree that Option 3 is correct here.
Personally I pretty much exclusively use face to face conversation for social reasons, such as building rapport/relationships, fighting status and dominance battles, bonding through shared experience, checking in for updates on moods and desires, or setting up plans. So OP, when you say that you can’t talk to many people about the nature of reality, my reaction is, “Of course! You’re using the wrong medium.”
You may have heard before that communication is only X% verbal (what you say), while the rest is paraverbal (how you say it) and nonverbal (what your body is doing). I don’t have a source for X and I don’t know if it has been rigorously studied, but everything I have seen points to X being low, around 5-10%. This implies that for most people, interpersonal communication is largely about things other than the words being said. If you aren’t picking up on all of what’s being said through these channels, you’re likely over-weighting the low-bandwidth verbal information. That you’re confused why others are neglecting what’s being said verbally and that others are confused why you’re neglecting what’s being said paraverbally and nonverbally are two sides of the same coin.
Better mediums for the purpose of discussing reality, I believe, are textual (to provide record and reference, and to remove interpersonal subtexts), and slow moving (to allow for parties to think through and clearly articulate their thoughts). Examples are academic papers and books, emails, and blogs, though these all also have drawbacks. I don’t think we have an ideal medium for discussing the nature of reality yet.
The blue apple test: Speak out the sentence “I’m a blue apple”. See how it feels. It likely makes you laugh because it’s absurd. That’s what absurd beliefs feel like.
With that reference feeling in hand, you can speak a sentence like “I’m not worthy of success” and see how that sentence feels. If it feels more true, then you believe it on some level, even if as a rationalist you would never ever say that you believe “I’m not worthy of success”.
If the beliefs that actually drive your actions are radically different than the beliefs towards which you admit to when thinking intellectually about them, you are likely going to be plagued by akrasia.
On LW we have the alief/belief distinction. We might say that you don’t believe “I’m not worthy of success” when it fails the blue apple test but you just alieve it. Most people don’t distinguish the two and only deal in aliefs.
Having a conversation where you actually express your aliefs can be interesting.
I went down the rabbit hole of researching the question “what is truth?” soon after I joined LW almost 3 years ago, and ended up with a rather unpopular anti-Platonic ontology of the term “truth” being worse than useless in most cases. The correspondence theory of truth stopped making sense to me because there is nothing for it to correspond to. So, it’s somewhat more radical than William James’s pragmatic theory of truth. But I guess this is probably not what you are interested in.
It corresponds to reality.
As for what reality is, I like Philip K. Dick’s formulation: “Reality is that which, when you stop believing in it, doesn’t go away.”
Right, it’s the last assumption that I ended up rejecting. But I’ve talked enough about it on this forum. And no, whatever straw interpretation of what I said that immediately comes to your mind, I don’t mean that.
Ok, what about the example from “The Simple Truth”: jumping off a cliff. Is it not the case that you will fall to your doom regardless of your beliefs? Can it not be said that the truth is that all things fall towards the earth, and that this does correspond to reality?
The Simple Truth shows that bad models are bad. It is not an argument for or against a specific concept of truth, despite what Eliezer might have intended by it.
What makes a bad model “bad”, other than that it does not correspond to reality?
The predictions it makes are incorrect.
Ok, I suspect we are using different definitions of ‘correspond’. ‘Correspond’ means “a close similarity; match or agree almost exactly.” In this context I have always interpreted ‘correspond to reality’, when applied to a model, as meaning that the model’s predictions have a close similarity to observation, matching or agreeing with it almost exactly. That is to say, a model which corresponds to reality correctly predicts reality, by definition.
If my model says the sky should be blue, and I go out and look and the sky is blue, my model corresponds to reality. If my theory says the sky should be green, and I go out and look and discover the sky to be blue, then my model does not correspond to reality. It seems to me that a model which corresponds to reality and yet is incorrect (does not match the world) is a logical impossibility.
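This reading of ‘correspond’ can be sketched concretely (my own toy example, not part of the discussion): a model corresponds to reality, in this operational sense, exactly when its predictions match observation within some tolerance. The observation pairs below are made up to fit d = gt²/2.

```python
# Toy sketch: "corresponds to reality" read operationally as
# "the model's predictions match observation within a tolerance".
def corresponds(model, observations, tol=0.0):
    return all(abs(model(x) - y) <= tol for x, y in observations)

def fall_distance(t, g=9.8):
    """Model: distance fallen after t seconds, d = g * t**2 / 2."""
    return 0.5 * g * t**2

# Made-up (time in s, distance in m) observations consistent with the model:
observations = [(1.0, 4.9), (2.0, 19.6)]
print(corresponds(fall_distance, observations, tol=0.01))  # True

bad_model = lambda t: 0.0  # "nothing ever falls": fails to correspond
print(corresponds(bad_model, observations, tol=0.01))  # False
```

On this definition a model that “corresponds” yet mispredicts is indeed a contradiction in terms, which is the point being pressed here.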
Therefore, I presume you must be using some other definition of ‘correspond’. What might that be?
Your wording:
My wording:
I.e. I make no claims about reality beyond it occasionally being a useful metamodel.
In the dualist reality+models ontology, yes. If you don’t make any ontological assumptions about anything “existing” beyond models, the above statement is not impossible, it is meaningless, as it uses undefined terms.
Is there any actionable difference between the two viewpoints?
Yes. For example, you don’t bother arguing about untestables. Is MWI true? Who cares, unless you can construct a testable prediction out of this model, it is not even a meaningful question. What about Tegmark 4? Same thing.
You may care about different worlds to different extents, with “truth” of a possible world being the degree of caring. In that case, it may be useful for evaluating (the relative weights of) consequences of decisions, which may be different for different worlds, even if the worlds can’t be distinguished based on observation.
And that’s another “actionable difference”. I care about possible/counter-factual worlds only to the degree they can become actual. I don’t worry about potential multiple copies of me in the infinite universe, because “what if they are me?”, not until there is a measurable effect associated with it.
Heh, my intuition is the opposite. What I felt but so far refrained from saying today was “Stop arguing about whether reality exists or not! it doesn’t change anything.” It seems we agree on that at least.
It’s really about the accuracy of your model in terms of predictions it makes, whether or not we can find any correspondence between those hidden variables and other observables?
Is that what you’re getting at?
I don’t understand what you mean by hidden variables in this context.
It corresponds to appearance. Models posit causal mechanisms, and the wrong mechanism can predict the right observations.
In general, the correspondence theory of truth means that a proposition is true when reality, or some chunk of reality, is the way the proposition says it is. Translating that as directly as possible into physical science, a theory would be true if its posits, the things it claims exist, actually exist. For instance, the phlogiston theory is true if something with the properties of phlogiston exists. The important thing is that correspondence in that sense, let’s say “correspondence of ontological content”, is not the same as predictive accuracy. To be sure, a theory that is not empirically predictive is rejected as being ontologically inaccurate as well... but that does not mean empirical predictiveness is a sufficient criterion of ontological accuracy; we cannot say that a theory tells it like it is just because it allows us to predict observations.
For one thing, instrumentalists and others who interpret science non-realistically still agree that theories are rendered true or false by evidence.
Another way of making this point is that basically wrong theories can be very accurate. For instance, the Ptolemaic system can be made as accurate as you want for generating predictions, by adding extra epicycles … although it is false, in the sense of lacking ontological accuracy, since epicycles don’t exist.
Another way, still, is to notice that theories with different ontologies can make equivalent predictions, like wave particle duality in physics.
The fourth way is based on sceptical hypotheses, such as the Brain in a Vat and the Matrix. Sceptical hypotheses can be rejected, for instance by appeal to Occam’s Razor, but they cannot be refuted empirically, since any piece of empirical evidence is subject to sceptical interpretation. Occam’s Razor is not empirical.
Science conceives of perception as based in causation, and causation as being comprised of chains of causes and effects, with only the ultimate effect, the sensation evoked in the observer, being directly accessible to the observer. The cause of the sensation, the other end of the causal chain, the thing observed, has to be inferred from the sensation, the ultimate effect—and it cannot be inferred uniquely, since, in general, more than one cause can produce the same effect. All illusions, from holograms to stage conjuring, work by producing the effect, the percept, in an unexpected way. A BIV or Matrix observer would assume that the percept of a horse is caused by a horse, but it would actually be caused by a mad scientist pressing buttons.
A BIV or Matrix observer could come up with science that works, that is useful, for many purposes, so long as their virtual reality had some stable rules. They could infer that dropping an (apparent) brick onto their (apparent) foot would cause pain, and so on. It would be like the player of a computer game being skilled in the game. But the workability of their science is limited to relating apparent causes to apparent effects, not to grounding causes and effects in ultimate reality.
For example, uselessness.
Please forgive my continuation of the Socratic method, but in what ways can a model be useless that differ from it not corresponding to reality?
Recall an old joke:
A man in a hot air balloon realizes he is lost. He spots someone in a field below and shouts down, “Can you tell me where I am?” “Yes,” the man replies, “you’re in a hot air balloon, about 30 feet above this field.” “You must be an engineer,” says the balloonist. “I am. How did you know?” “Well, everything you told me is technically correct, but it’s of no use to anyone.” “Very clever! And you must be a manager,” says the guy in the field. “Amazing! How did you work it out?” asks the balloonist. “Well, there you are in your elevated position generating hot air, you have no idea where you are or what you’re doing, but somehow you’ve decided it’s my problem.”
Yep. Moral of the story: never let the twain meet :-)
It’s a funny joke but beside the point. Knowing that he is in a balloon about 30 feet above a field is actually very useful. It’s just useless to tell him what he clearly already knows.
Sorry I’m dense. What does this have to do with anything? It is true that the balloonist is in a hot air balloon 30 feet above a field. These are correct facts. Are you arguing for a concept of truth which would not qualify “Yes, you’re in a hot air balloon, about 30 feet above this field” to be a true statement?
I think Lumifer is suggesting that a model can correspond accurately to reality (e.g., representing the fact that X is in a hot air balloon 30 feet above Y’s current location) but none the less be useless (e.g., because all X wants to know is how to get to Vladivostok, and knowing he’s in a balloon 30 feet above Y doesn’t help with that). And that this is an example of how a model can be “bad” other than inaccurate correspondence with reality, which is what you were asking for a few comments upthread.
Indeed they are. That is, actually, the point.
Recall your own question (emphasis mine): “in what ways can a model be useless that differ from it not corresponding to reality?”
A model can be useful without corresponding, though.
The Ptolemaic system can be made as accurate as you want for generating predictions.
Do you really believe the word truth should be stricken? In a deep discussion, of course, the word doesn’t really get used since you should be arguing about either facts, methods, or concepts.
I don’t think the focus of this discussion is quid est veritas? but a more pressing social question, “How can we have a discussion about truths, when we disagree about what makes a proposition true?”
Could you explain the pragmatic theory of truth a bit for the community?
P.S. I used to think that certain words were useless, until I decided/realized (through Wittgenstein) that I don’t get to decide such things (plus I am completely terrified that my English usage is insular or inarticulate, so I try to use ordinary language as much as possible).
“There is nothing for the correspondence theory of truth to correspond to” is a feature, not a bug. Because this is one of those philosophical debates which is really just a choice of definition. “Something is true if it corresponds to reality” is just a definition, and definitions don’t have truth* value.
*truth defined in a way that I think is pretty useful to define it, which is what we’re usually looking for when we pick definitions.
As for your options, have you considered the possibility that 99% of people have never formulated a coherent philosophical view on the theory of truth?
Better make that 99.99%, including myself.
Let me give you the basic outlines of what I am thinking. It has been a gusher of explanation and clarity for me.
First consider the basic distinction around here between epistemic rationality and instrumental rationality.
Epistemic Rationality:
But we recognize the broader skill of Instrumental Rationality:
Winning is about more than epistemic rationality, though epistemic rationality can be pretty dang handy.
Second, consider a Truth. What is it? At least some Truths are statements (don’t want to deal with algorithmic or model based truth today). Consider Truths as the winning statements, the statements that allow you to “steer the future toward outcomes ranked higher in your preferences”. We do lots of things with statements. We repeat them. We use them to generate other statements. We agree or disagree with others who say them. And, sometimes, we use them to more accurately map the world. But only sometimes.
Third, consider the etymology of the word “probability”. On reading Ian Hacking’s book “The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference ”, I came on what seemed an odd fact. Once upon a time, probable wasn’t about frequencies or likelihood of events, it was about the standing, credibility, and authority of the speaker. We can interpret that as the way they identified the statements with the better numbers, but maybe that’s not what it meant to them—it really just meant a quality of the speaker, and instead, through time, people found that it was a more winning way to characterize the meaning in terms of frequency and likelihood. The criteria for choosing the statements you wanted changed.
Fourth, consider Haidt’s recent work on moral modalities. http://www.moralfoundations.org/
It’s interesting to think of these moral modalities as innate, as biological pattern recognizers that evolved and don’t need to be learned. The morality of something is how hard it pings these pattern recognizers, and there is wide variation in the pattern of pings between different people—people weight the different modalities very differently.
Finally, think of a Truth detector, a filter between the winning Truths and the losers, as another candidate for evolutionary development, similarly with different Truth modalities as pattern recognizers, and similarly with widely varying weights between people.
What kind of Truth modalities would you expect? Certainly, correspondence with reality would be a good one. But it’s not the only one. Spoken by those in power. Spoken by authorities. In consonance with the tribe. With parents. Quieting a disagreeable confusion.
It’s not that people have no conception of correspondence with reality. It’s that that pattern recognizer just doesn’t ping that loudly, and so is drowned out by the others when they ping. Certainly, some of those other pattern recognizers don’t ping so loud for me, but seem to ping pretty loud for other people.
Years ago, arguing about God with a Christian girl I knew, she said something that just struck me as bizarre. “I just decided my life would be better if I believed in God.” What? What does that have to do with anything? That doesn’t make it true. That doesn’t mean it corresponds to reality.
It’s taken me decades to catch up with her. She seemed to have the idea of “Truth as winning statements”, but being a fanatic for “truth as correspondence”, I just didn’t get it.
PeerGynt
Stupid’s got nothing to do with it.
Are you so sure your preferred Truth modalities are better than theirs at winning? Probably, through most of human history, and even today, a dominant Correspondence to Reality modality was an evolutionary and personal loser.
I would have thought a discussion of the nature of truth came under epistemic rationality.
See paragraph
Epistemically accurate statements are only a subset of winning statements. Actually, that’s only “some epistemically accurate statements”, as others are losers in some use contexts.
Indeed: Some winning statements aren’t true, so truth shouldn’t be casually equated with winning.
Not how I was using the term:
Paragraph 2:
Would an accurate summary of this be “humans have a generic, intuitive, System 1 Truth-detector that does not distinguish between reality-correspondence, agreeability, tribal signaling, etc, but just assigns +1 Abstract Truth Weight to all of them; distinguishing between the different things that trip this detector is a System 2 operation”? That seems...surprisingly plausible to me. It also seems like something one could test, with whatever it is scientists use to look at brain activity.
Hook a person up to a brain scanner. Give them true and false statements to evaluate. Also give them statements distinguished by, say, status of the speaker. Perhaps add Green/Blue coded statements if they’re of a political bent.
Then see if the same brain regions light up in each case.
That’s not how System 1 works in my experience. System 1 is only concerned with modeling of the world and making predictions, particularly of the results of various actions one might make. Its model however tends to be extremely primitive. Also System 2 doesn’t have direct access to the model, only the predictions. Furthermore, as far as System 1 is concerned making statements, or even having System 2 believe something, are actions whose consequences are to be predicted.
I am mostly a coherentist and get constantly tripped up by the correspondencist attitudes the sequences here take. So it may be a job for me. But beware, I am a sloppy arguer, I suck at being precise and exact, as I think in pictures which may be a good thing but the result is often “sorta-kinda y’know what I mean?” and useless for people who have a mathematician’s precise mind.
A) My main issue with correspondence theory is over-valuing the accuracy of observation, sensory experience etc. There is a hidden assumption that hypothesis-building or theories are far, far more inaccurate than observation. Eliezer frequently talks about just opening the box and looking, just opening your eyes and seeing, just checking etc. in short he has high confidence in observation being accurate.
B) I think observation is nothing but a lower-order theory/model/hypothesis. It can be just as inaccurate as theories. Quite literally: not only the conscious mind is affected by biases, but even the visual cortex.
C) The progress of science was held back mostly by not having access to good enough observation instruments. You cannot really be a Galileo without a telescope. Bare-eye observation fails us in all kinds of ways, giving us a universe that is likely to be Ptolemaic.
D) But observation with instruments is just as problematic, as instruments can go wrong, can get miscalibrated, and ultimately they themselves rely on theory. You cannot build a LHC without already having lots of theoretical physics. Observation is fallible.
E) From this follows that you cannot simply match unreliable theories to unreliable observations and call it a day. You must also match observations to observations, theories to theories, and sometimes even observations to theories.
F) In other words, truth is whatever is coherent with the whole body of science, all observations and all theories cross-validating each other. One potentially faulty observation or ten does not a theory validate.
G) Quine’s “Two Dogmas of Empiricism” demonstrates that we can only experimentally verify the whole body of science, not any individual statement.
H) Data is Latin for “given”. Its etymology sounds as if we are getting our data by fax from Heaven. In reality, data is anything but given. Data is gathered, mined through hard and fallible work.
I) The useful data is that which gives diffs. I.e., when I am debugging software I need not only data that says it fails in this case but also that it does not fail in that case. Mining these kinds of data is not easy, is fallible, and relies on theory: I hunt for a diff only when I already have a hypothesis of what may be the cause of failure. As data is not given, you often need to form a hypothesis first and mine and gather data specifically to test it.
J) Objection: but in instrumental rationality, we want to change our sensory experience, so even if my observation of something being painful is wrong, if I made the pain go away I solved the problem, right? No. You are a doctor. A patient complains of stomach pain. You give painkillers. Pain goes away. A year later he dies of cancer. We cannot simply reduce instrumental rationality to felt, observed needs, and just assume the correctness of such assumption does not matter. Theory plays a role here. Knowing medical theory which gives a guess of cancer helps the doctor and the patient more than a very, very accurate observation of the pain.
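The “diffs” idea in point I can be sketched with a toy debugging example (mine, not the commenter’s): one case alone localizes little, but a failing case paired with a passing case tests a specific hypothesis about the cause.

```python
# Toy illustration: the *diff* between a passing and a failing case,
# not either case alone, is what tests a debugging hypothesis.
def mean(xs):
    # Hypothesis under test: this fails on empty input (division by zero).
    return sum(xs) / len(xs)

def run_case(xs):
    try:
        mean(xs)
        return "ok"
    except ZeroDivisionError:
        return "fail"

print(run_case([1, 2, 3]))  # ok   (passing side of the diff)
print(run_case([]))         # fail (failing side confirms the hypothesis)
```

The two runs were chosen because a hypothesis already existed; gathering the diff was theory-driven, which is the point being made about data not being “given”.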
Finally, let me quote from this Quine summary: “Our beliefs form a web, with the outer fringes connecting to experience. Revision at the edges leads to revisions elsewhere in the web, but the decisions of where the revisions will occur are underdetermined by the logical relations among the beliefs. In the revision, ‘any statement can be held true, come what may, if we make drastic enough changes elsewhere in the system’.”
Of course, “Darwin” from “The Simple Truth” would point out that some of those drastic enough changes involve dying.
To simplify the whole thing: if truth is correspondence to reality, our data / observation of reality should be highly reliable, meaning it should be “given” (Lat. datum) instead of mined and gathered through hard and fallible work, the direction of that work often determined by fallible hypotheses, relying on potentially faulty instruments, even worse bare-eye observation, bias in the visual cortex, and symptoms of pragmatic problems often being quite far from their actual causes. And since that is not the case, we are better off matching everything with everything, not just theory with observation, and that grand system of cross-matching is called the body of science.
Scientists conduct experiments on individual statements, or small sets of them, all the time. Quine’s nihilistic conclusion that “any statement can be held true, come what may, if we make drastic enough changes elsewhere in the system” fails to grapple with the question of how knowledge can be achieved in the face of the fact that it is achieved. The words “drastic” and “come what may” should have been a clue to him, a small sensation of confusion, for these words admit there is something wrong with going to such lengths, while the sentence denies it.
I spent most of this morning making arrangements for a trip abroad. To this end I looked up hotels and railway timetables on various web sites, and at last made choices, decisions, bookings, and payments.
I expect the trains to depart and arrive at the times stated, my tickets to be accepted, and the hotel I have booked to exist and to be expecting me. What is the coherentist analysis of this situation? Why am I rightly confident in these arrangements, subject only to the realistic but fairly unlikely possibilities of engineering faults, the hotel burning down, and the like?
Correspondence (matching theories to observations) is a subset of coherence (matching everything with everything)
It is a very useful subset as long as observations are reliable and easy to procure, which they are in your case, and indeed in most cases, though not in all.
A counter-example would be Many-Worlds: you cannot match it with observations, but you can match it with other theories and see it follows the pattern.
Your observations rest on definitions which come from other parts of your knowledge. Trains depart on time? Down to the nanosecond, or will second-level precision do? Is 10 seconds late still okay? Is departure time given in the origin’s timezone, even if the train crosses several? If they give departure time in the origin’s timezone and arrival time in the destination’s, won’t that upset your expectation of the trip length? And if so, can you miss a connection? What does “ticket accepted” mean: would presenting a false ticket and bribing the conductor count as accepted? Would a well-made false ticket that tricks every conductor do? This is not nitpicking; it means your observations seem obvious only because they rest on all kinds of non-conscious, tacit, consensual knowledge beyond them. And that is roughly what Quine meant: I can “prove” that hardly any train ever departs on time if I just make another change in the system, such as saying that on time means nanosecond exactness. That is a change not worth making, of course.
Correspondence is not just matching theories to observation. It is matching theories to reality. Since we don’t have pure transcendent access to reality, this involves a lot of matching theories to observation and to each other, and rejecting the occasional observation as erroneous; however, the ultimate goal is different from that of coherence, since perfectly coherent sets of statements can still be wrong.
If your point is that “reality” is not a meaningful concept and we should write off the philosophizing of correspondence theorists and just focus on what they actually do, then what they actually do is identical to what coherentists actually do, not a subset.
As far as I can tell, most coherentists want to match theories with reality too, because truth doesn’t really have any other useful definition. The goal is not to be coherent within a random, reality-detached set of sentences: the goal is to be coherent with the whole of science. When a scientist rejects (assigns very low probability to) the observation of a perpetuum mobile on the basis that it contradicts the laws of physics, that is a standard coherentist move. This is another one. The goal is to avoid wasting time and money on non-fruitful data gathering. Ultimately the only thing rejected is the blind, data-only approach that may be considered a straw-manning of the correspondentist position, except that it is unfortunately actually used too much. A coherentist will simply not spend money on an airplane ticket to check whether someone’s garage contains a dragon: the proposition contradicts so much of what we already know that its very low prior probability does not justify the cost. You may as well call this a wiser version of correspondencism; the barriers are not exactly black and white here. This is, unfortunately, philosophy, so fairly muddy :)
You’ve got coherentism confused with holism.
Is holism even a thing?
Yes. So is Google.
You can have holism without coherence, where you require that the whole of science is true by correspondence but the parts aren’t. Inasmuch as it is correspondence, it isn’t coherence.
The correspondence theory of truth is a theory of truth, not a theory of justification. Correspondentists don’t match theories to reality directly, since they have no direct way of detecting a mismatch; they use proxies like observation sentences and predictions. Having justified a theory as true, they then use correspondence to explain what its truth consists of.
That’s what nitpicking is!
Changing the definition of the words in a sentence does not change the proposition that was originally expressed by the sentence. It just creates a different proposition expressed with the same words, and is irrelevant to understanding the original one.
I’d say there are three phenomena going on:
There are specific problems with correspondence theory—what do mathematical truths correspond to? moral truths? modal truths? vague truths? -- which either cause philosophers to reject correspondence theory in more classical domains, or cause them to adopt an ersatz theory where some truths depend on correspondence and some depend on other things. (This is approximately where I am, and arguably makes me a rejector of ‘correspondence theory,’ though I accept it in the mundane contexts LWers generally mean it.)
There are some people who really do just have silly views like ‘there’s nothing outside of our experiences’ or the more respectable ‘reality outside our experiences is ineffable’ (thanks to Berkeley, Kant, and various mystics). There is then a larger pool of people who don’t quite hold those beliefs, but believe they believe in them, or (more accurately) have positive associations with asserting those statements in arguments.
Most other people probably just have different interests/goals, and these are being misconstrued as a disagreement. Some people think it would be interesting or useful or elegant to define ‘truth’ in a way that makes us keep the Real World in mind; others think it would be interesting or useful or elegant to define ‘truth’ in a way that makes it reasonable proxy for ‘epistemic praiseworthiness’, so that even a systematically deceived agent could still be given a ‘truth’ ranking that matches its level of revealed virtue. These may both be good concepts to have a name for, and which one (if either) we call ‘truth’ is not so important.
Steel-manning group 1 means talking about the problem cases where correspondence theory seems to break down, while granting the general point that there is a universe outside my head and there’s some fashion in which the assertion-conditions for many of my ordinary statements depend on how the world is. A group 2 steel-man might look like an exotic simulation hypothesis—perhaps combined with rejecting Chalmers’ theory of skeptical hypotheses in The Matrix as Metaphysics. A group 3 steel-man would involve reasonably, explicitly discussing the pragmatic value of different conceptions of truth, relative to our linguistic needs and history.
Mathematical reality.
Can you be more specific? Is ‘2+2=4’ true in virtue of literal mathematical objects like ‘2’ and ‘4’? If so, how do those objects causally relate to my assertion that 2 and 2 makes 4, or to the evidence underlying that assertion?
Because they cause there to be four apples in a box if you put two apples in, and then put two more apples in.
If both you and a sentient alien in another galaxy write out addition tables, the two tables will be highly correlated with each other (in fact they’ll correspond). Which means that either one caused the other, or both have a common cause. What’s the common cause? The laws of mathematics.
So maths is physics.
But I can write an equation for an inverse cube law of gravity, which doesn’t apply to this universe. What does it correspond to?
No, you can write out an equation using suggestively named variables like “G” and “m” and “r”. The second the equation stops modeling the strength of the gravitational force, however, it ceases to be a “law of gravity”, regardless of what letters you used for the variables. It’s just some random equation.
That amounts to saying that what isn’t physically true isn’t physically true. The point, however, is that what is not physically true can be mathematically true, so mathematical truth cannot consist of correspondence to the physical world.
Not quite, although I agree the approach I describe also applies to establish that the laws of physics exist.
Yes, and if you and the alien both write down a cube law and predict what orbits would be like in a universe where it were true, you would reach the same conclusions.
That doesn’t establish that mathematics is true by correspondence.
So what would you describe as the cause of the correlation in the orbits calculated by myself and the alien?
Running off the same axioms and inference rules.
In a sense that means the same laws, but the laws are not independently existing entities that mathematical truths correspond to.
In the Philip K. Dick sense they are.
In the PKD sense, they are not, because finitists and constructivists adopt different axioms and get different results.
I reject the correspondence theory of truth (at least what philosopher’s call the “correspondence theory”, which I think has certain important differences from the view Eliezer subscribes to).
I started out writing a description of my views in a comment, but it ended up being way too long, so I made it a separate post. Here it is.
On the other hand, I am also basically in agreement with Megan McArdle’s point of view. Despite maintaining what I said about people’s relationship with reality, I think that the particular arguments and positions people hold usually do have some worth, and that you can always get something out of them if you try. Any discussion will go much better if you take what people say and begin with the part of it that you agree with—and you should always be able to find something that you agree with—rather than jumping immediately to the parts you disagree with. This makes the whole difference between a friendly discussion and an angry argument, and the former is much more productive for everyone than the latter.
I think you need to distinguish between rejecting the correspondence theory (wholly) , and rejecting the correspondence-only approach in favour of something more multifaceted. I’m happily in the latter camp, FYI.
I’m not sure succeeding at number 4 helps you with the unattractiveness and discomfort of number 3.
Say you do find some alternative steel-manned position on truth that is comfortable and intellectually satisfying. What are the odds that this position will be the same position as that held by “most humans”, or that understanding it will help you get along with them?
Regardless of the concept of truth you arrive at, you’re still faced with the challenge of having to interact with people who have not-well-thought-out concepts of truth in a way that is polite, ethical, and (ideally) subtly helpful.
I’d love to hear a more qualified academic philosopher discuss this, but I’ll try. It’s not that the other theories are intuitively appealing, it’s that the correspondence theory of truth has a number of problems, such as the problem of induction.
Let’s say that one day we create a complete simulation of a universe whose physics almost completely match ours, except in some minor details, such as that certain types of elementary particles, e.g. neutrinos, are never allowed to appear. Suppose that there are scientists in the simulation, and they work out the Standard Model of their physics. The model presupposes the existence of neutrinos, but their measurement devices are never going to interact with a neutrino. Is the statement “neutrinos exist” true or false from their point of view? I’d say that the answer is “it does not matter”. To turn the example around, can we be sure that aether does not exist? Under Bayesianism, every instance of scientists not observing aether increases our confidence that it does not exist. However, we might be living in a simulation where the simulators have restricted all observations that could reveal the existence of aether. So it cannot be completely excluded that aether exists but is unobservable. The correspondence theory is thus forced to admit that “aether exists” has an unknown truth value. In contrast, a pragmatic theory of truth can simply say that anything that cannot, in principle, be observed by any means also does not exist, and be fine with that.
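The Bayesian point above can be made concrete with a small numerical sketch (all probabilities here are invented for illustration): each experiment that fails to detect aether lowers the posterior probability that aether exists, but never drives it all the way to zero, which is exactly the residual the comment is pointing at.

```python
# Illustrative Bayesian update: repeated non-observations of aether.
# The prior and likelihood below are made-up numbers, not real physics.

p_aether = 0.5            # prior probability that aether exists
p_detect_if_aether = 0.3  # chance one experiment detects it, given it exists
# If aether does not exist, the probability of a (non-spurious) detection is 0,
# so a non-detection is certain in that case.

for _ in range(10):  # ten experiments, none of which detects aether
    p_no_detect = p_aether * (1 - p_detect_if_aether) + (1 - p_aether) * 1.0
    p_aether = p_aether * (1 - p_detect_if_aether) / p_no_detect

# The posterior drops well below the 0.5 prior, yet remains strictly positive:
print(p_aether)
```

The simulation scenario corresponds to the case where `p_detect_if_aether` is secretly 0: then every update leaves the posterior untouched, and no amount of evidence settles the question.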
Ultimately, the correspondence theory presupposes a deep Platonism, as it relies on the Platonic notion of Truth being “somewhere out there”. It renders science vulnerable to the problem of induction (which is not a real problem as far as the real world is concerned): it allows anyone to dismiss the scientific method off-handedly by saying “yeah, but science cannot really arrive at the Truth—David Hume already proved so!”
We have somehow to deal with the possibility that everything we believe might turn out to be wrong (e.g. we are living in a simulation, and the real world has completely different laws of physics). Accepting correspondence theory means accepting that we are not capable of reaching truth, and that we are not even capable of knowing if we’re moving in the direction of truth! (As our observations might give misleading results.) A kind of philosophical paralysis, which is solved by the pragmatic theory of truth.
There’s also the problem that categories really do not exist in some strictly delineated sense; at least not in natural languages. For example, consider sentences of the form “X is a horse”. According to correspondence, a sentence from this set is true iff X is a horse. That seems to imply that X must be a mammal of genus Equus etc. - something with flesh and bones. However, one can point to a picture of a horse and say “this is a horse”, and would not normally be considered lying. Wittgenstein’s concept of family resemblance comes to the rescue, but I suspect it does not play nicely with the correspondence theory.
Finally, there’s a problem with truth in formal systems. Some problems in some formal systems are known to be unsolvable; what is the truth value of statements that expand to such a problem? For example, consider the formula G (from Goedel’s incompleteness theorem) expressed in Peano Arithmetic. Intuitively, G is true. Formally, it is possible to prove that assuming G is true does not lead to inconsistencies: we can provide a model of Peano Arithmetic in which G holds, and the standard natural numbers are an example of such a model. However, it is also possible to construct nonstandard models of Peano Arithmetic extended with the negation of G as an axiom. So assuming that the negation of G is true also does not lead to contradictions. So we’re back at the starting point—is G true? Goedel thought so, but he was a mathematical Platonist, and his views on this matter are largely discredited by now. Most do not believe that G has a truth value in some absolute sense.
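The situation can be stated compactly in standard notation (this formalization is mine, not from the comment; it assumes PA is consistent):

```latex
% G asserts its own unprovability in PA:
G \;\leftrightarrow\; \neg \mathrm{Prov}_{PA}(\ulcorner G \urcorner)

% G holds in the standard model, so PA + G is consistent:
\mathbb{N} \models G \;\Rightarrow\; \mathrm{Con}(PA + G)

% By the first incompleteness theorem, G is unprovable,
% so PA + \neg G is consistent too, and has only nonstandard models:
\mathrm{Con}(PA) \;\Rightarrow\; PA \nvdash G \;\Rightarrow\; \mathrm{Con}(PA + \neg G)
```

Both extensions being consistent is exactly why the question “is G true?” cannot be settled inside the formal system itself; “G is true” ends up meaning “G holds in the standard model”, which smuggles in a preferred interpretation.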
This, together with Tarski’s undefinability theorem, suggests that it might not make sense to talk about a unified mathematical Truth. Of course, formal systems are not the same as the real world, but the difficulty of formalizing truth in the former increases my suspicion of formalizations / axiomatic explanations of it in the latter.
The common-sense version of truth is “telling it like it is”. The big problem with this notion is that reality makes it hard to properly perceive truth. Figuring out the laws of reality is an arduous task that took thousands of years, required a level of civilization far removed from our ancestral environment, and is still far from complete. Survival means using a number of shortcuts and heuristics to get a good enough approximation. This leads to a pragmatic notion of truth where “correspondence to reality” is a goal insofar as it has a practical benefit. The ideal Platonic truth takes a backseat.
For example, determining whether God exists is a rather unimportant academic point (not to mention being impossible to decide one way or the other). Evaluating the social utility of religion, on the other hand, is a lot more important. If a religion is good enough to benefit the group, that truth is much more important than the literal truth of its dogma. Many cults are “true” in the sense that they provide a community and sense of purpose for people who were otherwise unfulfilled.
Your optimism is heroic. I also don’t think we should accept that the rest of humanity are morons, going to hell in a hand-basket. I subscribe to a bit from Cormac McCarthy’s No Country For Old Men, “I think the truth is always simple. It has pretty much got to be. It needs to be simple enough for a child to understand. Otherwise it’d be too late. By the time you figured it out it would be too late.” I also think the main fundamentals of reality have to be like that. If they weren’t we’d just have to despair over our fellow man.
Your understanding about academic philosophy is right on the mark; the coherence camp would not accept that a sentence can refer to mind-independent facts about the world. And Cormac’s character Sheriff Ed Tom Bell has something to say about that too:
I think one of the most important propositions that is too often ignored in today’s society is that reality is knowable, that discussion can be rational and not emotional, that opinions can be informed rather than uninformed, that arguments can be passionate without being belligerent, and that discomfort is a small price to pay for a little more truth in our lives.
Basically, if people don’t believe in heartily defending the truth, then they are only ever paying lip service to the correspondence theory.
Statements contain primary, secondary, tertiary, and quaternary meanings and nuances.
There’s a simple example of a case where the secondary meaning supersedes what I’m temporarily calling the primary meaning. The above statement is incorrect in a primary sense (in truth, they believe they understand perfectly well), and correct in a secondary sense of what they’re really trying to convey. The user is operating in the secondary sense when they speak.
But here’s the fun part—people usually don’t know which sense they are operating in. They operate in all of them simultaneously. They bleed over, too—sometimes, if they say something in a secondary sense, they will fool themselves into believing the primary sense, and so on.
It seems like they’re not operating on connection to reality, but there is usually at least some level at which a statement implies a belief about reality; it’s just not always at the primary level.
For most examples of where someone makes a statement clearly at odds with (to you) obvious truth, it’s generally also possible to figure out a certain prediction they have about reality—it’s just often hidden under layers of meaning and not explicitly made.
Right, exactly. Except that “something else” is also part of reality. So you are discussing reality, still, it’s just conflated. Something is being lost in translation.
The thing is, not everyone conflates the same way, and some conflate more than others, so we are often bad at figuring out what is really being discussed, and that’s where part of the divide and misunderstandings comes from.