BBDD: I’m increasingly of the opinion that truth as correspondence to reality is a minority orientation.

PeerGynt: I’ve spent a lot of energy over the last couple of days trying to come to terms with the implications of this sentence.
Let me give you the basic outlines of what I am thinking. It has been a gusher of explanation and clarity for me.
First, consider the basic distinction around here between epistemic rationality and instrumental rationality.
Epistemic Rationality:
The art of obtaining beliefs that correspond to reality as closely as possible.
But we recognize the broader skill of Instrumental Rationality:
The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as “winning”.
Winning is about more than epistemic rationality, though epistemic rationality can be pretty dang handy.
Second, consider a Truth. What is it? At least some Truths are statements (I don’t want to deal with algorithmic or model-based truth today). Consider Truths as the winning statements, the statements that allow you to “steer the future toward outcomes ranked higher in your preferences”. We do lots of things with statements. We repeat them. We use them to generate other statements. We agree or disagree with others who say them. And, sometimes, we use them to more accurately map the world. But only sometimes.
Third, consider the etymology of the word “probability”. On reading Ian Hacking’s book “The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference”, I came upon what seemed an odd fact. Once upon a time, probable wasn’t about frequencies or likelihoods of events; it was about the standing, credibility, and authority of the speaker. We can interpret that as the way they identified the statements with the better numbers, but maybe that’s not what it meant to them. Maybe it really did just mean a quality of the speaker, and over time people found it more winning to characterize the meaning in terms of frequency and likelihood. The criteria for choosing the statements you wanted changed.
Fourth, consider Haidt’s recent work on moral modalities: http://www.moralfoundations.org/

“The theory proposes that several innate and universally available psychological systems are the foundations of ‘intuitive ethics’... The five foundations for which we think the evidence is best are: Care/harm, Fairness/cheating, Loyalty/betrayal, Authority/subversion, and Sanctity/degradation. We think there are several other very good candidates for ‘foundationhood’, especially Liberty/oppression.”
It’s interesting to think of these moral modalities as innate, as biological pattern recognizers that evolved and don’t need to be learned. The morality of something is how hard it pings these pattern recognizers, and there is wide variation in the pattern of pings between different people—people weight the different modalities very differently.
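To make that “weighted pings” picture concrete, here is a minimal sketch in Python. The foundation names come from moralfoundations.org; the situation, its activation numbers, and the two weight profiles are all invented for illustration.

```python
# A minimal sketch of moral judgment as weighted pattern recognizers.
# Foundation names from moralfoundations.org; all numbers are invented.

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity", "liberty"]

def moral_ping(activations: dict, weights: dict) -> float:
    """How hard a situation pings the innate recognizers, scaled by how
    heavily this particular person weights each modality."""
    return sum(activations.get(f, 0.0) * weights.get(f, 0.0) for f in FOUNDATIONS)

# One hypothetical situation, pinging mostly loyalty and sanctity.
flag_burning = {"loyalty": 0.9, "sanctity": 0.7, "care": 0.1}

# Two hypothetical people with very different weightings.
person_a = {"care": 1.0, "fairness": 1.0, "loyalty": 0.2,
            "authority": 0.1, "sanctity": 0.1, "liberty": 0.8}
person_b = {"care": 0.6, "fairness": 0.6, "loyalty": 1.0,
            "authority": 0.9, "sanctity": 1.0, "liberty": 0.4}

print(moral_ping(flag_burning, person_a))  # ~0.35: barely registers morally
print(moral_ping(flag_burning, person_b))  # ~1.66: pings hard, reads as immoral
```

Same stimulus, same recognizers, very different moral verdicts, purely from the weights.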
Finally, think of a Truth detector, a filter between the winning Truths and the losers, as another candidate for evolutionary development, similarly with different Truth modalities as pattern recognizers, and similarly with widely varying weights between people.
What kind of Truth modalities would you expect? Certainly, correspondence with reality would be a good one. But it’s not the only one. Spoken by those in power. Spoken by authorities. In consonance with the tribe. With parents. Quieting a disagreeable confusion.
It’s not that people have no conception of correspondence with reality. It’s that that pattern recognizer just doesn’t ping that loudly, and so is drowned out by the others when they ping. Certainly, some of those other pattern recognizers don’t ping so loudly for me, but seem to ping pretty loudly for other people.
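Here is a toy sketch of that drowning-out effect, using the same weighted-ping model as above; the modality names, scores, weights, and acceptance threshold are all invented for illustration.

```python
# A toy Truth detector: a statement "feels true" when its weighted
# modality pings clear a threshold. All numbers are invented.

def feels_true(pings: dict, weights: dict, threshold: float = 0.5) -> bool:
    signal = sum(pings[m] * weights.get(m, 0.0) for m in pings)
    return signal >= threshold

# A statement that corresponds to reality poorly, but is endorsed by
# authority, in consonance with the tribe, and comforting.
statement = {"correspondence": 0.1, "authority": 0.9,
             "tribe": 0.9, "comfort": 0.8}

correspondence_fan = {"correspondence": 1.0, "authority": 0.05,
                      "tribe": 0.05, "comfort": 0.05}
typical_profile = {"correspondence": 0.2, "authority": 0.4,
                   "tribe": 0.4, "comfort": 0.3}

print(feels_true(statement, correspondence_fan))  # False: weak correspondence ping
print(feels_true(statement, typical_profile))     # True: other pings drown it out
```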
Years ago, arguing about God with a Christian girl I knew, she said something that just struck me as bizarre. “I just decided my life would be better if I believed in God.” What? What does that have to do with anything? That doesn’t make it true. That doesn’t mean it corresponds to reality.
It’s taken me decades to catch up with her. She seemed to have the idea of “Truth as winning statements”, but I, being a fanatic for “truth as correspondence”, just didn’t get it.
In other words, if I truly believed this, I would label most people as being too stupid to have a real discussion with.
PeerGynt: Stupid’s got nothing to do with it.
Are you so sure your preferred Truth modalities are better than theirs at winning? Through most of human history, and probably even today, a dominant Correspondence to Reality modality was an evolutionary and personal loser.
Epistemically accurate statements are only a subset of winning statements. Actually, that’s only “some epistemically accurate statements”, as others are losers in some use contexts.
Would an accurate summary of this be “humans have a generic, intuitive, System 1 Truth-detector that does not distinguish between reality-correspondence, agreeability, tribal signaling, etc, but just assigns +1 Abstract Truth Weight to all of them; distinguishing between the different things that trip this detector is a System 2 operation”? That seems...surprisingly plausible to me. It also seems like something one could test, with whatever it is scientists use to look at brain activity.
Hook a person up to a brain scanner. Give them true and false statements to evaluate. Also give them statements distinguished by, say, status of the speaker. Perhaps add Green/Blue coded statements if they’re of a political bent.
Then see if the same brain regions light up in each case.
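As a toy sketch of what that comparison might amount to, assuming one mean activation vector per condition over a few regions of interest (all numbers invented), high similarity between conditions would be weak evidence for a shared detector:

```python
# Toy comparison of activation patterns across conditions; cosine similarity
# as a crude stand-in for "do the same regions light up?". Numbers invented.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical mean activations over three regions of interest.
true_false_eval  = [0.9, 0.2, 0.1]  # evaluating plain true/false statements
speaker_status   = [0.8, 0.3, 0.1]  # statements distinguished by speaker status
green_blue_coded = [0.7, 0.3, 0.2]  # politically coded (Green/Blue) statements

print(cosine(true_false_eval, speaker_status))    # near 1.0: similar regions
print(cosine(true_false_eval, green_blue_coded))  # compare against the rest
```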
That’s not how System 1 works, in my experience. System 1 is only concerned with modeling the world and making predictions, particularly of the results of various actions one might take. Its model, however, tends to be extremely primitive. Also, System 2 doesn’t have direct access to the model, only to the predictions. Furthermore, as far as System 1 is concerned, making statements, or even having System 2 believe something, are actions whose consequences are to be predicted.
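A toy rendering of that picture, with invented payoffs: if System 1 treats adopting a belief as just another action to be scored by predicted consequences, the “winning” belief needn’t be the accurate one.

```python
# System 1 as a consequence predictor: beliefs are actions, scored only by
# their predicted payoffs. The candidate beliefs and payoffs are invented.

predicted_payoff = {
    "believe the tribe's creed": 0.8,         # predicted: acceptance, comfort
    "believe the evidence-backed view": 0.3,  # predicted: accurate but costly
}

# System 1 simply backs the belief whose predicted consequences rank highest;
# correspondence to reality never enters the calculation directly.
chosen = max(predicted_payoff, key=predicted_payoff.get)
print(chosen)
```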
I would have thought a discussion of the nature of truth came under epistemic rationality.
See the paragraph above: “Epistemically accurate statements are only a subset of winning statements.”
Indeed: Some winning statements aren’t true, so truth shouldn’t be casually equated with winning.
Not how I was using the term. See paragraph 2: “Consider Truths as the winning statements, the statements that allow you to ‘steer the future toward outcomes ranked higher in your preferences’.”