I believe it’s doublethink
This is my attempt to provide examples and a summarised view of the “Against Doublethink” posts in How To Actually Change Your Mind.
What You Should Believe
Let’s assume I am sitting down with my friend John and we each have incomplete and potentially inaccurate maps of a local mountain. When John says “My map has a bridge at grid reference 234567”, I should add a note to my map saying “John’s map has a bridge at grid reference 234567”, *not* actually add the bridge to my map.
The same is true of beliefs. If Sarah tells me “the sky is green” I should, assuming she is not lying, add to my set of beliefs “Sarah believes the sky is green”. What happens too often is that we directly add “The sky is green” to our beliefs. It is an overactive optimisation that works in most cases but causes occasional problems.
Taking the analogy a step further, we can decide to question John about why he has drawn the bridge on his map. Then, depending on the reason, we can choose to draw the bridge on our map or not.
We can give our beliefs the same treatment. If I asked Sarah why she believes the sky is green and she said “someone told me” without being able to provide further information, I wouldn’t choose to believe it. If, however, she said “I have seen it for myself”, then I might choose to believe it, depending on my priors.
I Believe You Believe
The curious case is when someone says “I believe X”. This can be meant in a few ways:
I have low confidence in this belief. e.g. “I believe that my friend Bob’s eyes are hazel, but I’m not sure”.
I have this belief but have reasons to think you won’t share it. e.g. “I believe she is attractive”.
I have the fact ‘I believe the sky is green’ in my mental model of the world. e.g. “I believe god exists.”
The first case I do not have a problem with. It means your probability density has not yet shown a clear winner, but you are giving me the answer that is in the lead at the moment. In this case I should add a note saying “John believes there is a bridge here, but he is not very confident in that belief”.
I don’t have a problem with the second case either. I can have the belief “Angelina Jolie is attractive”, someone else can lack that belief, and we can both be rational. This is because we are using different criteria for attractive. If I were to switch to a consistent definition of attractive it wouldn’t be a problem, e.g. the phrase “Angelina Jolie is regularly voted one of the 100 most attractive people in the world” doesn’t require the prefix ‘I believe...’.
The last case is even more curious. Let’s assume that John (from our first example) says “I believe there is a bridge at grid reference 234567” but means it in the third sense. I should add a note to my map saying “John has the following note on his map: ‘I believe there is a bridge at grid reference 234567’”. You would hope that the reason he has that note is that there is actually a bridge drawn on his map. Unfortunately people are not that rational. You can have a cached belief that says “I believe X” even if you do not have “X” as a belief. By querying why they have that belief you should be able to work out whether you should believe it, or even whether they should.
To use the example from religion you can have the belief “I believe god exists” even if you do not have the belief “god exists”.
Recommendations
I’m going to put myself on the line and give some recommendations:
When we are told or recite a fact, try to remember why it was added to our beliefs; the reason will often turn out to be poor (see the sketch after this list).
When telling others facts, tell them the reason you believe them, e.g. say “I think there is a bridge here because I overheard someone talking about it”. This should help you weed out cached beliefs in yourself and give the other person a better metric for adding to their own beliefs.
When someone tells you something, ask them why they have the belief. It also helps if you recite it back to them as if you are trying to understand, for example: “I see. You think there is a bridge here. Why do you think that?”.
When we hear “I believe” or “I think”, try to classify the statement as one of the three options above.
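To make the note-taking idea concrete, here is a minimal C++ sketch (the struct, field names, and example values are my own and purely illustrative, not part of the post) of storing each claim together with its source and the stated reason instead of merging it straight into my own map:

```cpp
#include <iostream>
#include <string>
#include <vector>

// A claim tagged with where it came from and why, rather than merged
// directly into "my map". All names and values here are illustrative only.
struct Claim {
    std::string content;  // e.g. "there is a bridge at grid reference 234567"
    std::string source;   // who told me, or "seen for myself"
    std::string reason;   // why the source believes it, if I asked
};

int main() {
    std::vector<Claim> my_map;

    // Recommendation 1: record why the fact was added, not just the fact.
    my_map.push_back({"there is a bridge at grid reference 234567",
                      "John", "he overheard someone talking about it"});

    // Recommendations 2 and 3: pass the claim on with its provenance attached,
    // so the listener can decide how much to update.
    for (const Claim& c : my_map) {
        std::cout << c.source << " claims \"" << c.content
                  << "\" because " << c.reason << '\n';
    }
}
```

The only point of the structure is that the content, the source, and the reason travel together; dropping the last two is exactly the overactive optimisation described above.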
In that case, you would mean that you find her attractive. That is not a belief. I would then conclude that you believe that you find her attractive, and, unless there is something very wrong with you, I would conclude that you find her attractive.
If you believe that her attractiveness is a property of her, rather than a property of you, then you have a different problem.
I agree with you and your reasoning on what you should believe following that. Yet still, I find myself saying to people, “I believe that …” to emphasise that it is my belief. Maybe I am wrong in doing this but it appears to help people understand that “she is attractive” is not a property of her. I guess I could just make it explicit another way by saying “I find her attractive”.
I will concede that it is not the most sensible use of the phrase “I believe” but people will still use it and it will remain helpful to have it as one of the buckets we can separate uses of that phrase into.
Generally people on Less Wrong use “belief” to refer to objective beliefs rather than opinions, as it seems to be a better way to carve reality along its joints. There’s no reason to assume that other people will adhere to this convention though.
So for a particular statement X, a rationalist will put it into one of two categories:
A mixture of 1 and 3 for different people: some people expressing belief in X have low confidence in it, while others merely believe they believe X
2: people expressing belief in X are expressing an opinion
The first is for facty statements, the second for subjective ones.
“Opinion” is itself a fuzzily defined word. In this case I think you mean “opinions” in the sense of subjective tastes/preferences/likings, not in the sense of different beliefs about the truth-value of factual statements.
Yep—thanks for clarifying.
Or you could think of it as both. You could say ‘she is attractive_me’ or define attractive(attracted person, attracting person) as being true/false, and then say that attractive(me, her) == true. Agreed, you can’t have ‘she is attractive’, with ‘attractive’ denoting possession of a particular set of traits that remains constant regardless of whoever’s using the word.
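A tiny C++ sketch of that two-place reading (the function name, arguments, and hard-coded example are mine, purely illustrative): “attractive” is modeled as a relation between an observer and a subject, so two observers can disagree without either being irrational.

```cpp
#include <iostream>
#include <string>

// attractive(observer, subject): a two-place predicate. The answer depends
// on both arguments, not on the subject alone. The lookup below is a
// stand-in for whatever criteria the observer actually uses.
bool attractive(const std::string& observer, const std::string& subject) {
    return observer == "me" && subject == "her";  // illustrative stub
}

int main() {
    std::cout << std::boolalpha
              << attractive("me", "her") << '\n'    // true:  attractive(me, her)
              << attractive("you", "her") << '\n';  // false: another observer disagrees
}
```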
But, yeah, I’m pretty sure that I agree with what you’re saying in general.
ETA: “whoever’s”
I’m not convinced that this is either workable or desirable. Not workable because it would soon become unwieldy trying to remember the network of sources from which our beliefs originate. Not desirable because it would lead to us judging the reliability of a belief by looking at the reliability of the sources, rather than by looking for some independent confirmation or refutation. Hence we would be more likely to accept something that ‘everybody knows’ because more sources seem to lead to higher probability.
Also, what do we do when we find that there isn’t a bridge at 234567? Do we go back to each of our sources and try to persuade them that they are wrong (and that they should similarly notify their sources and anyone else they have passed the information on to)? Isn’t it better to make a general announcement, “Contrary to what is believed, there is no bridge at 234567”, and to add the evidence to support this assertion?
People aren’t going to remember a large network of inferences, true. But I suspect that reminding oneself of these considerations could be most useful and productive.
The Dutch language has different verbs for “I think X, and X is an objective belief” (as in “I believe that my friend Bob’s eyes are hazel”) and for “I think X, and X is a subjective belief” (as in “I believe she is attractive”).
This is good, but I have one qualification for the belief in belief case (number 3).
Something that doesn’t come up much in map/territory discussions is the fact that the territory has no concept of a map. “Maps” only exist on other maps; the territory just contains gooey neuron stuff.
So if John says “I believe X”, we may be tempted to ask whether John really believes X, or whether he just believes that he believes X. But this is just asking whether an object is a blegg or a rube.
What we’re actually observing is:
John expresses belief X
John says and does some things that appear consistent with a belief in X, and others that aren’t
John is therefore exhibiting compartmentalization
To a greater or lesser extent, this may fit the “belief in belief” pattern.
I have at least a casual tag for a lot of what I know, even if it’s no better than “I read it somewhere online”; at that point, at least I know I haven’t checked it any further.
There’s another advantage to tagging with “X believes” (or, more exactly, “I remember X said they believe”): it can slow down the jump from “X believes something not on my map” to “X is a fool”. This is implied in your point 3 at the end, but I thought it should be made explicit.
I can’t imagine having such a database in a human mind. For every fact you would have to remember who told you about it (sometimes multiple people) and maybe also when they told you, etc., so for each fact you would need something like a Wikipedia “talk page”.
It’s the fundamental question of rationality: What do you believe, and why do you believe it?
I’m honestly curious. Think of a fact, and then ask yourself why you know it. Out of 5 attempts, for how many did you actually have no idea why that fact is there?
I would expect that if I were to ask people “Why do you think daffodil flowers need lots of water?” they would at least say something like “Oh, I heard it somewhere” (assuming that they do indeed believe this). From this I would choose to shift my belief only very slightly.
It’s worth being cautious here: just because a brain can generate an answer to a question doesn’t mean that it was actually storing that information. “I heard it somewhere” may just be the default response when no supporting evidence can be found.
But your examples here are valid—sometimes we really do remember X separately from the evidence in support of X. And if X is something important this is probably to be encouraged.
This post reminds me of evidential markers in linguistics (http://en.wikipedia.org/wiki/Evidentiality). Evidential markers are morphemes (e.g. prefixes or suffixes) which, when attached to the verb, describe how the speaker came to believe the fact that he is asserting. These can include things like direct knowledge (“I saw it with my own eyes”), hearsay (“Somebody told me so but I didn’t see for myself”), and inference (“I concluded it from my other beliefs”). While evidential markers are less specific than what’s described in this post (“Somebody told me” rather than “John told me last Thursday at lunch”), I suspect that speakers of languages with evidential markers would be a lot more inclined to remember the more specific details.*
Does anyone here speak a language with evidential markers? If so, what do you think of the claim (asserted in at least four separate comments here) that these things would be far too difficult to remember and keep track of?
*I suspect this because I’ve read some articles about languages which use absolute direction (north, south, east, west) instead of subjective direction (left, right, in front of, behind); speakers of these languages develop very good internal compasses and always know which direction they’re facing. (Here I’m assuming this is due to nurture rather than nature.) If language can cause people to acquire such a skill, it doesn’t seem unreasonable that language could also cause people to acquire a talent for remembering sources of information.
I’m fairly new to the field of computer programming, but there’s a metaphor that I find rather apt. Instead of passing by value, saying “X is true”, you can pass by reference, saying “Location Y says that X is true”. Of course, passing by const reference would be better, but our machinery is corrupted.
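A rough C++ illustration of that metaphor (my own sketch, not from the comment): the “pass by value” version absorbs the claim as if it were my own belief, while the “pass by const reference” version keeps pointing at the source’s claim without copying it or being able to silently alter it.

```cpp
#include <iostream>
#include <string>

// "Pass by value": the claim is copied straight into my own beliefs.
void adopt_by_value(std::string claim) {
    std::cout << "I believe: " << claim << '\n';
}

// "Pass by const reference": I keep a reference to the source's claim,
// without copying it into my map and without being able to alter it.
void note_by_const_reference(const std::string& source, const std::string& claim) {
    std::cout << source << " says: " << claim << '\n';
}

int main() {
    const std::string johns_claim = "there is a bridge at grid reference 234567";
    adopt_by_value(johns_claim);                   // "X is true"
    note_by_const_reference("John", johns_claim);  // "Location Y says that X is true"
}
```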
Don’t forget that a lot of “I believe...” statements are really about associating with something rather than actual belief. Before you can sort an “I believe” statement into one of your three categories, you have to make sure that it actually means something and isn’t just signalling.
A difficulty is that it can often be hard to identify why we believe something: either we’ve forgotten how we came to believe it, or we believe it because of the accumulation of lots of small pieces of information that are hard to summarize.
I know facts about Zimbardo’s prison experiment because I studied it in University. I know the feeling a nail makes when I hit it with a hammer because I have done it. I know Greece has been granted a second bailout because I overheard someone talking about reading it in the news.
These are all things for which I know why I know them. I would guess that you could likewise give me reasons why you think the world is round.
It is harder when there are many small pieces of evidence. I hadn’t thought of that. And I agree that my recommendations are not possible to follow all the time.
I would be happy to revise them to apply only when receiving facts you find surprising or expect the other person to find surprising. That way we only need two well-defined stop signs in our System 1 thinking. Stop sign one: when we hear “I believe …”, find which category it fits into. Stop sign two: when we are surprised by a fact, reply with “Fascinating, where did you hear that?”.
This should be on main.
This is my first post, I was unable to post on main.
I am also unaware of how I should decide where to post. What makes a main post?
Hmm … quality, really. One of the functions of Discussion is posts that aren’t ready for main. That’s what I meant, that this post is good enough for that.
Do you have enough karma now to move it to main? If so you should. Or maybe a moderator can.
I do not know how to move it. If you think it should be moved, can you please ask a moderator?
Seconded.
First occurrence: “John’s map has a bridge at grid reference 234567”
Second occurrence: “John has the following note on his map: ‘I believe there is a bridge at grid reference 234567’”
Was this slight difference intentional (one has you add a note that there is a bridge on his map; the second has you add a note that he has a note that he believes there is a bridge)?
Maybe. I wonder if too often we get out our eraser and immediately start trying to erase on their map.
The difference was very intentional. I wanted to make clear the extra level of indirection between the two phrases. In the second case John may not actually have a bridge on his map at the indicated point; all we know is that he has the note saying that he believes there is a bridge there. It should logically follow that he should only say he believes something to be on his map if it is actually on his map. The point I was trying to make is that sometimes these things do not follow.
Gotcha, and thanks for the clarification. I see the difference, and in the first case you seem to be highlighting the difference between the map and the territory, while in the second case, you are highlighting the difference between the actual map and a belief about what’s on one’s map (in other words, one more “meta level” removed).
Now knowing that this was intentional, my suggestion is that you might have wanted to hold back on that first phrase until you made your clarification about the three types of statement meanings. Then perhaps highlight the full gamut of options:
1) Add a bridge to my own map
2) Add a note to my map that John has a bridge on his map at location …
3) Add a note to my map that John believes he has a bridge on his map at location …
My current read is that the first paragraph cautions us not to do #1, but to do #2 instead… meanwhile you knew that you’d actually be advocating not doing #2 either, because #3 might be the likely case.
Hopefully that makes sense. The caution was fantastic in the early part of the article (don’t add the bridge automatically); I just think the reader might have benefited from seeing your full purpose for the article (the three belief statement meanings) prior to you advocating which note we should add to the map.
Thanks for writing this. I enjoyed the read.
I’d be curious about practical steps concerning (specific clause emphasized): “By querying why they have that belief you should be able to work out whether you should believe it, *or even whether they should*.”
I’ve encountered dragon in the garage scenarios with others, typically concerning religious belief. It seems clear to me that there’s no actual rent being paid. They profess belief, but there’s really nothing predictive about the beliefs, and any apparent failures already have cached explanations or even experts to defer to for explanation.
How do you (or any others who read this) proceed with things like this? Do you try to keep illustrating that the belief is not really a belief? Just make a note on your map concerning person X having a belief in belief, or just ignore it if there are no practical implications that concern you?
How far do you go to try to “save” others from belief in belief? I’ve been asked what the harm is in believing falsely if there’s no practical harm to be experienced… I’m not sure I have the answer. It feels very wrong to me to think of holding a false belief, but if there’s no negative result, should I care?
I always did that, as far as I can remember, to the extent that I can do it. The issue, of course, is that the more tags you store together with the knowledge, the more information you need to remember. I don’t think people confuse things so much by choice as by necessity; memory is far from perfect, and a trick that tells you “you must remember more details” doesn’t help if you can’t.
It’s the easiest advice to give for improving rationality: think harder and remember more. And it just doesn’t work, except for people who don’t already try hard enough.
For the most part, my response to recommendation 1 is: this won’t be much use. It is hard enough remembering “the bottom line” without also remembering the source or evidence. Most fact-learning occasions call instead for a snap judgment on the quality of the source or evidence, followed by accepting or rejecting or setting some quasi-probabilistic attitude toward the information, and then spending no more mental energy on the evidence. In those cases, the evidence is usually soon forgotten. So, most fact-recalling occasions are not going to reveal my reason for the alleged fact.
Obviously there are exceptions. If the information is controversial and I intend to share it, then I’ll try remembering the source. Or better, bookmarking a link or taking notes.
Please see my response to Viliam and ShardPhoenix.