Mainstream status:
“The conceivability of being wrong” and “perspective-taking on beliefs” are old indeed; I wouldn’t be the least bit surprised to find explicit precedent in Ancient Greece.
Skill 3 in the form “Trust not those who claim there is no truth” is widely advocated by modern skeptics fighting anti-epistemology.
Payoff matrices as used in the grid-visualization method are ancient; using the grid-visualization method in response to a temptation to rationalize was invented on LW as far as I currently know, as was the Litany of Tarski. (Not to be confused with Alfred Tarski’s original truth-schemas.)
“The conceivability of being wrong” aka “Consider the opposite” is the standard recommended debiasing technique in psychology. See e.g. Larrick (2004).
The most famous expression of this that I’m aware of originates with Oliver Cromwell:
I beseech you, in the bowels of Christ, think it possible you may be mistaken.
Arguably, Socrates’s claims of ignorance are a precursor, but they may stray dangerously close to anti-epistemology. I’m not a good enough classical scholar to identify anything closer.
The grid-visualization method / Litany of Tarski was invented on LW as far as I currently know.
The grid-visualization method seems like a relatively straightforward application of the normal-form game, with your beliefs as your play and the state of the world as your opponent’s play. The advocacy to visualize it might come from LW, but actually applying game theory to life has a (somewhat) long and storied tradition.
[edit] I agree that doing it in response to a temptation to rationalize is probably new to LW; doing it in response to uncertainty in general isn’t.
I’ve seen it before used in the treatment of Pascal’s wager: believe in god x god exists = heaven, believe in god x god doesn’t exist = wasted life… etc.
Can’t cite specific texts, but it was definitely pre-LW for me, from people who had not heard of LW.
Ah yes, sorry. Payoff matrices are ancient; the Tarski Method is visualizing one in response to a temptation to rationalize. Edited.
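To make the structure being discussed concrete, here is a minimal sketch of that kind of 2x2 grid, with what you believe as the row and the actual state of the world as the column. The outcome descriptions are my own illustrative placeholders, not payoffs specified anywhere in this thread:

```python
# A hedged illustration of the grid-visualization ("Tarski Method") payoff
# matrix: rows are what you choose to believe, columns are how the world is.
# The outcome strings are hypothetical examples, not canonical payoffs.
outcomes = {
    ("believe X",    "X is true"):  "accurate map; you can act on X",
    ("believe X",    "X is false"): "comforting error; costly surprises later",
    ("disbelieve X", "X is true"):  "you miss what is actually there",
    ("disbelieve X", "X is false"): "accurate map; no wasted effort",
}

for (belief, world), result in outcomes.items():
    print(f"{belief:<13} | {world:<10} | {result}")
```

The point of visualizing all four cells, rather than only the cell one is tempted to dwell in, is what the comments above describe as the response to a temptation to rationalize.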
That sounds like a good idea in two ways: It gives you practice at visualizing the alternatives (which is always good if it can be honed to greater availability/reflex by practice), and by choosing those specific situations, you are automatically providing real-world examples in which to apply it; that way, it is a practical skill.
The intent seems different there, and that shapes the details. Pascal’s wager isn’t about how you act because of your beliefs—the belief is considered to be the action, and the outcomes are declared by fiat (or perhaps, fide) at the start of the problem, rather than modeled in your head as part of the purpose of the exercise.
The Litany of Tarski has connections to certain versions of the direction-of-fit model of beliefs and desires. The model is usually considered a descriptive attempt at cashing out the difference between the functional role played by beliefs and desires. Both beliefs and desires are intentional states, they have propositional content (we believe that p, we desire that p). According to the direction-of-fit model, the crucial difference between beliefs and desires is the relation between the content of these states and the world—specifically, the direction of fit between the content and the world differs. In the case of beliefs, subjects try to fit the content to the world, whereas in the case of desires, subjects try to fit the world to the content.
However, some philosophers treat the direction-of-fit model not as descriptive but as normative. The model tells us that the representational contents of our beliefs and desires should be kept rigorously separate (don’t let your conception of how the world is be contaminated by your conception of how you would like it to be) and that we should have different attitudes to the contents of these mental states. Here’s Mark Platts, from his book Ways of Meaning:
Beliefs aim at being true, and their being true is their fitting the world; falsity is a decisive failing in a belief, and false beliefs should be discarded; beliefs should be changed to fit with the world, not vice versa. Desires aim at realization, and their realization is the world fitting with them; the fact that the indicative content of a desire is not realized is not yet a failing in the desire, and not yet any reason to discard the desire; the world, crudely, should be changed to fit with our desires, and not vice versa.
Also related (but not referring to the map/territory distinction as explicitly) is what Ken Binmore calls “Aesop’s principle” (in reference to the fable in which a fox who cannot reach some grapes decides that the grapes must be sour). From his book Rational Decisions:
[An agent’s] preferences, her beliefs, and her assessments of what is feasible should all be independent of each other.
For example, the kind of pessimism that might make [the agent] predict that it is bound to rain now that she has lost her umbrella is irrational. Equally irrational is the kind of optimism that Voltaire was mocking when he said that if God didn’t exist, it would be necessary to invent Him.
I should note that Binmore is talking about terminal preferences here. Of course, instrumental preferences need not (indeed, should not) be independent of our beliefs about the world and our assessments of what is feasible.
As someone else engaged with mainstream philosophy, I’d like to mention that I personally think that direction of fit is one of the biggest red herrings in modern philosophy. It’s pretty much just an unhelpful metaphor. Just sayin’.
I never saw it as a real ‘model’, just a way of clarifying definitions, and making statements such as “I believe that {anything not a matter of fact}” null. It provides a way to distinguish between “I don’t believe in invisible dragons in my basement.” and “I don’t believe in {immoral action}”. I suspect the original intention was to validate a philosopher who got fed up with someone who hid behind ‘I don’t believe in that’ in a discussion, after which the philosopher responded with evidence that the subject under discussion was factual.
It’s really not my area at all, so I don’t really have any well-developed opinions on this. My comment wasn’t meant to be an endorsement of the model, I was just pointing out a similarity with a view in the mainstream literature. From a pretty uninformed perspective, it does seem to me that the direction-of-fit thing doesn’t really get at what’s important about the distinct functional roles of belief and desire, so I’m inclined to agree with your assessment.
Yeah, I did realise that you weren’t necessarily supporting it, I just wanted to make it clear that it’s not orthodoxy in mainstream philosophy! Sorry if it came off as a bit critical.
What we really believe feels like the way the world is; from the inside, other people feel like they are inhabiting different worlds from you.
In psychology, this is called construal. A person’s beliefs, emotions, behaviors, etc. depend on their construal (understanding/interpretation) of the world.
Some versions of cognitive behavioral therapy ask you to write down the pros and cons of holding a particular belief.