Belief in Self-Deception
I spoke yesterday of my conversation with a nominally Orthodox Jewish woman who vigorously defended the assertion that she believed in God, while seeming not to actually believe in God at all.
While I was questioning her about the benefits that she thought came from believing in God, I introduced the Litany of Tarski—which is actually an infinite family of litanies, a specific example being:
If the sky is blue
I desire to believe “the sky is blue”
If the sky is not blue
I desire to believe “the sky is not blue”.
“This is not my philosophy,” she said to me.
“I didn’t think it was,” I replied to her. “I’m just asking—assuming that God does not exist, and this is known, then should you still believe in God?”
She hesitated. She seemed to really be trying to think about it, which surprised me.
“So it’s a counterfactual question...” she said slowly.
I thought at the time that she was having difficulty allowing herself to visualize the world where God does not exist, because of her attachment to a God-containing world.
Now, however, I suspect she was having difficulty visualizing the contrast between a world where God exists and a world where God does not, because all her thoughts were about her belief in God, while her causal network modelling the world did not contain God as a node. So she could easily answer “How would the world look different if I didn’t believe in God?”, but not “How would the world look different if there were no God?”
She didn’t answer that question, at the time. But she did produce a counterexample to the Litany of Tarski:
She said, “I believe that people are nicer than they really are.”
I tried to explain that if you say, “People are bad,” that means you believe people are bad, and if you say, “I believe people are nice”, that means you believe you believe people are nice. So saying “People are bad and I believe people are nice” means you believe people are bad but you believe you believe people are nice.
I quoted to her:
“If there were a verb meaning ‘to believe falsely’, it would not have any
significant first person, present indicative.”
—Ludwig Wittgenstein
She said, smiling, “Yes, I believe people are nicer than, in fact, they are. I just thought I should put it that way for you.”
“I reckon Granny ought to have a good look at you, Walter,” said Nanny. “I reckon
your mind’s all tangled up like a ball of string what’s been dropped.”
—Terry Pratchett, Maskerade
And I can type out the words, “Well, I guess she didn’t believe that her reasoning ought to be consistent under reflection,” but I’m still having trouble coming to grips with it.
I can see the pattern in the words coming out of her lips, but I can’t understand the mind behind them on an empathic level. I can imagine myself into the shoes of baby-eating aliens and the Lady 3rd Kiritsugu, but I cannot imagine what it is like to be her. Or maybe I just don’t want to?
This is why intelligent people only have a certain amount of time (measured in subjective time spent thinking about religion) to become atheists. After a certain point, if you’re smart, have spent time thinking about and defending your religion, and still haven’t escaped the grip of Dark Side Epistemology, the inside of your mind ends up as an Escher painting.
(One of the other few moments that gave her pause—I mention this, in case you have occasion to use it—is when she was talking about how it’s good to believe that someone cares whether you do right or wrong—not, of course, talking about how there actually is a God who cares whether you do right or wrong, this proposition is not part of her religion—
And I said, “But I care whether you do right or wrong. So what you’re saying is that this isn’t enough, and you also need to believe in something above humanity that cares whether you do right or wrong.” So that stopped her, for a bit, because of course she’d never thought of it in those terms before. Just a standard application of the nonstandard toolbox.)
Later on, at one point, I was asking her if it would be good to do anything differently if there definitely was no God, and this time, she answered, “No.”
“So,” I said incredulously, “if God exists or doesn’t exist, that has absolutely no effect on how it would be good for people to think or act? I think even a rabbi would look a little askance at that.”
Her religion seems to now consist entirely of the worship of worship. As the true believers of older times might have believed that an all-seeing father would save them, she now believes that belief in God will save her.
After she said “I believe people are nicer than they are,” I asked, “So, are you consistently surprised when people undershoot your expectations?” There was a long silence, and then, slowly: “Well… am I surprised when people… undershoot my expectations?”
I didn’t understand this pause at the time. I’d intended it to suggest that if she was constantly disappointed by reality, then this was a downside of believing falsely. But she seemed, instead, to be taken aback at the implications of not being surprised.
I now realize that the whole essence of her philosophy was her belief that she had deceived herself, and the possibility that her estimates of other people were actually accurate, threatened the Dark Side Epistemology that she had built around beliefs such as “I benefit from believing people are nicer than they actually are.”
She has taken the old idol off its throne, and replaced it with an explicit worship of the Dark Side Epistemology that was once invented to defend the idol; she worships her own attempt at self-deception. The attempt failed, but she is honestly unaware of this.
And so humanity’s token guardians of sanity (motto: “pooping your deranged little party since Epicurus”) must now fight the active worship of self-deception—the worship of the supposed benefits of faith, in place of God.
This actually explains a fact about myself that I didn’t really understand earlier—the reason why I’m annoyed when people talk as if self-deception is easy, and why I write entire blog posts arguing that deliberately choosing to believe the sky is green is harder to get away with than people seem to think.
It’s because—while you can’t just choose to believe the sky is green—if you don’t realize this fact, then you actually can fool yourself into believing that you’ve successfully deceived yourself.
And since you then sincerely expect to receive the benefits that you think come from self-deception, you get the same sort of placebo benefit that would actually come from a successful self-deception.
So by going around explaining how hard self-deception is, I’m actually taking direct aim at the placebo benefits that people get from believing that they’ve deceived themselves, and targeting the new sort of religion that worships only the worship of God.
Will this battle, I wonder, generate a new list of reasons why, not belief, but belief in belief, is itself a good thing? Why people derive great benefits from worshipping their worship? Will we have to do this over again with belief in belief in belief and worship of worship of worship? Or will intelligent theists finally just give up on that line of argument?
I wish I could believe that no one could possibly believe in belief in belief in belief, but the Zombie World argument in philosophy has gotten even more tangled than this and its proponents still haven’t abandoned it.
I await the eager defenses of belief in belief in the comments, but I wonder if anyone would care to jump ahead of the game and defend belief in belief in belief? Might as well go ahead and get it over with.
I don’t know how well you know this person, so my advice may be unnecessary. But your post gives me the impression that you need to be much more careful about speculating on how her mind works. I think that it’s a red flag when you write first that
. . . and then proceed to make apparently confident declarations about how her mind works, such as
As you yourself have observed, we largely understand other people by taking a portion of our own black-box mind, plugging in a few explicit settings (such as beliefs or experiences), letting the model run for a bit, and seeing what pops out. In particular, to understand how another person makes judgments, we collect their evinced beliefs, try to twiddle some dials until our model expresses the same beliefs, and then let it run for a bit. We then try to peer into the model as best we can, getting as good a picture of its inner workings as introspection allows us. We then take this picture as our hypothesis about how the other person thinks.
But the first quote above is strong evidence that your mind works differently from hers in some highly relevant respects. Therefore, you should be highly skeptical that what is going on in her mind resembles what it took to make the model of her in your own mind match her utterances. But you give me the impression that you haven’t been sufficiently skeptical of the match between her mind and your model of it. I think that this has led you astray on several points.
For example, based on what you’ve written, I don’t think that you’re using the right model to understand what was going on in her mind when she said, “I believe that people are nicer than they really are.” You were led to this confusion because she was not using the word “believe” in the way that you, and your model of her, do. You are using “belief” to mean a feature of a model of how the world is. But that, I expect, is not what she meant. Thus, your remarks here --
-- were irrelevant because they do not apply to the sense of the word “believe” that she was using.
For what it’s worth, in my model of her, when she said “I believe that people are nicer than they really are,” she meant, “When I reflect on my emotional attitude towards people, I see that this attitude is of the sort that, in the absence of its actual cause, could have been caused by a falsely high belief (in your sense) about peoples’ niceness.”
The actual cause for her emotional attitude is perhaps her “religion”. Or perhaps it is something else. Perhaps she has no idea what the actual cause is, or perhaps she thinks she does, but she doesn’t really. But none of this implies that she was attributing to herself the belief that people are nicer than she actually believes them to be (where, here, I’m using “belief” in your sense.)
Her utterance seems analogous to someone who walks out of an optometrist’s office after having his pupils dilated and says, “Because of those drops the optometrist gave me, I believe the sun is brighter than it really is.” If we heard this, we shouldn’t conclude that he believes something contradictory, or that he has incorrect beliefs about his beliefs. His word “belief” in this case probably does not mean “best guess about how things really are.” Rather, it’s a clumsy way to say that some qualities of his experience of the world are as if he had a certain belief (in the sense normally understood). He does not mean to imply that he has any wrong beliefs (in the conventional sense). It would be a mistake to say that his subjective experience of the light is in any way erroneous. After all, it accurately reflects the fact that he had those drops put in his eyes.
Similarly, your interlocutor’s statement that she “believes” that people are nicer than they really are referred to a particular quality of her emotional attitude towards them, not to a belief (in your sense) about how they are. In particular, it didn’t imply any expectation about how they would behave. That, I expect, is why she was initially taken aback when you asked, “So, are you consistently surprised when people undershoot your expectations?” The problem wasn’t, as you appear to think, that she had prevented her own mind from drawing obvious conclusions. The problem was that you (because of her confusing wording) were speaking of her so-called “belief” as though it were a belief in the normal sense, something that should lead to certain expectations about other peoples’ actions. But I expect that it wasn’t any such thing, notwithstanding her unfortunate choice of words.
An interesting hypothesis, Tyrrell; but she explicitly explained to me about how, if you think people are nicer than they really are, then this makes you happier.
You’re right to call it a mere hypothesis. I hope that I made its tentative nature clear.
But that explanation of hers seems to me to be consistent with my hypothesis. No surprise, because it was part of the data that I was trying to fit when I constructed it.
I would be curious to know more about how she responded when you asked her, “So, are you consistently surprised when people undershoot your expectations?” Did she have anything more to say after repeating the question?
My hypothesis is that she simply meant, “It makes me happy to pretend that people are nicer than they really are.”
I think that the first time Eliezer said he couldn’t get into her mind, he meant that he couldn’t understand the psychological state she needed to be in to make that statement. The second time—where he was writing about what she believed—he was discussing her apparent epistemological state.
There are significant differences between the two for observers. I can almost never understand someone else’s psychological state, but I can often figure out what they are talking about and how they got there epistemologically—that is, what could have caused their stated beliefs.
When I read “I believe people are nicer than they really are,” I got the impression her meaning was along the lines of “people are nicer inside than their actions suggest.” On reflection, this might be because that’s what I believe. It ties in to the fundamental attribution error. People’s actions are based so much on environment and circumstance that if you had a way to truly look into a person, I think you’d see a better person than you would have guessed from their actions alone. Most people don’t see themselves as evil. They do things we see as evil, but in their heads they are doing what they think is good.
I’d be interested in hearing what exactly she said that brought on your analysis, Eliezer. I realise it was a long time ago, and I’m not likely to get a reply anyway, but it seems likely to me that her statement came from an intuitive belief in the fundamental attribution error. I know I held that belief long before I first encountered it in HPMoR, so it’s possible for her too.
I think that there’s a better chance that he’ll see your comment if you reply directly to the post rather than to another comment. At least, I think that that’s how it works.
I’ll go one step further and defend belief in belief, infinitely regressed. ;-) As you point out, the placebo effect here is simply the expectation of a positive result—and it applies equally at any level of recursion here.
Humans only need a convincing argument for predicting a positive result, not a rational proof of that prediction! Once the positive result is expected, we get positive emotions activated every time we think of anything linked to that result, leading to self-fulfilling prophecies on every level.
This being the case, one might question whether it’s rational to disbelieve in belief, if you have nothing equally beneficial to replace it with.
When it comes to external results, sure, it makes sense to have greater prediction accuracy. But for interior events—like confidence, creativity, self-esteem, etc. -- biasing one’s predictions positively is a significant advantage, as it stabilizes what would otherwise be an unstable system of runaway feedback loops.
People whose systems are negatively biased, on the other hand, can get seriously stuck. They basically hit one little setback and become paralyzed because of runaway negative self-fulfilling prophecy.
(I’ve been such a person myself, and I’ve worked with/on many of them. Indeed, it was noticing that other, far less “rational” and “intelligent” individuals were much more confident, calm, and successful than I was, that led me to start seriously investigating the nature of mind and beliefs in the first place, and to begin noting the distinctions between people I dubbed “naturally successful” and those I considered “naturally struggling”.)
My boyfriend was once feeling a bit tired and unmotivated for a few months (probably mild depression), and he also wanted to stop eating dairy for ethical reasons. He felt that his illness was partly mentally generated. He decided that he was allergic to dairy, and that dairy was causing his illness. Then he stopped eating dairy and felt better!
He told me all this, and also told me that he usually believes he is actually allergic to dairy, and it is hard to remember that he is not. When someone asks how he knows he is allergic to dairy, he says something plausible and false (“The doctor ran blood tests”) and believes it if he doesn’t stop and think too much.
He believes he is not allergic to dairy, but he believes he believes he is allergic to dairy? Belief-in-belief. But he recognizes this and explained it to me—so that’s a belief-in-belief-in-belief? But it helped him get over his mental illness and stop eating dairy… that’s winning.
In general I would say a belief-in-belief is useful if you decide some behaviors are desirable, but some false model of the world better motivates you to behave properly. Belief-in-belief-in-belief is useful if you know too much to think both “Z is true” and “I believe not-Z”. Then you tell yourself you have a belief-in-belief.
Disclaimer: This is weird to me and I don’t really understand how he pulls it off.
If I had been talking to the person you were talking to, I might have said something like this:
Why are you deceiving yourself into believing Orthodox Judaism as opposed to something else? If you, in fact, are deriving a benefit from deceiving yourself, while at the same time being aware that you are deceiving yourself, then why haven’t you optimized your deceptions into something other than an off-the-shelf religion by now? Have you ever really asked yourself the question: “What is the set of things that I would derive the most benefit from falsely believing?” Now if you really think you can make your life better by deceiving yourself, and you haven’t really thought carefully about what the exact set of things about which you would be better off deceiving yourself is, then it would seem unlikely that you’ve actually got the optimal set of self-deceptions in your brain. In particular, this means that it’s probably a bad idea to deceive yourself into thinking that your present set of self deceptions is optimal, so please don’t do that.
OK, now do you agree that finding the optimal set of self deceptions is a good idea? OK, good, but I have to give you one very important warning. If you actually want to have the optimal set of self deceptions, you’d better not deceive yourself at all while you are constructing this set of self deceptions, or you’ll probably get it wrong, because if, for example, you are currently sub-optimally deceiving yourself into believing that it is good to believe X, then you may end up deceiving yourself into actually believing X, even if that’s a bad idea. So don’t self deceive while you’re trying to figure out what to deceive yourself of.
Therefore, to the extent that you are in control of your self deceptions, (which you do seem to be) the first step toward getting the best set of self deceptions is to disable them all and begin a process of sincere inquiry as to what beliefs it is a good idea to have.
And hopefully, at the end of the process of sincere inquiry, they discover that the best set of self deceptions happens to be empty. And if they don’t, if they actually thought it through with the highest epistemic standards, and even considered epistemic arguments such as honesty being one’s last defence, slashed tires, and all that… Well, I’d be pretty surprised, but if I were actually shown that argument, and it actually did conform to the highest epistemic standards… Maybe, provided it’s more likely that the argument was actually that good, as opposed to my just being deceived, I’d even concede.
Disclaimer: I don’t actually expect this to work with high confidence, because this sort of person might not actually be able to do a sincere inquiry. Regardless, if this sort of thought got stuck in their head, it could at least increase their cognitive dissonance, which might be a step on the road to recovery.
“Disclaimer: I don’t actually expect this to work with high confidence, because this sort of person might not actually be able to do a sincere inquiry.”
Well, exactly… If the person were thinking rationally enough to contemplate that argument, they really wouldn’t need it.
I have never successfully converted a religious person to atheism, but my ex-girlfriend did. I am a more rational person than her, I know more philosophy, I have earnestly tried many times, she just did this once, etc. How did she do it? The person in question was male and his religion forbade him from sex outside marriage. Most people are mostly ruled by their emotions.
My working model of this person was that the person has rehearsed emotional and argumentative defenses to protect their belief, or belief in belief, and that the person had the ability to be reasonably rational in other domains where they weren’t trying to be irrational. It therefore seemed to me that one strategy (while still dicey) to attempt to unconvince such a person would be to come up with an argument which is both:
Solid (Fooling/manipulating them into thinking the truth is bad cognitive citizenship, and won’t work anyway because their defenses will find the weakness in the argument.)
Not the same shape as the argument their defenses are expecting.
Roko: How is your working model of the person different from mine?
My working model of a religious person such as the above is that they assess any argument first and foremost on the basis “will accepting this argument cause me to have to abandon my religious belief?”. If yes, execute “search for least implausible counterargument”.
As such, no rational argument whose conclusion obviously leads to the abandonment of religion will work. However, rational arguments that can be accepted on the spot without obviously threatening religion, and which lead via hard-to-predict emotional channels to the weakening and defeat of that belief might work. It is my suspicion that persuading someone to change their mind on a really important issue almost always works like this.
“she just did this once, etc. How did she do it? ”
By appealing to a non-rational or irrational argument that would lead the person to adopt rationality.
Arguing rationally with a person who isn’t rational that they should take up the process is a waste of time. If it would work, it wouldn’t be necessary. It’s easy to say what course should be taken with a rational person, because rational thought is all alike. Irrational thought patterns can be nearly anything, so there’s no way to specify an argument that will convince everyone. You’d need to construct an argument that each person is specifically vulnerable to.
The problem is that you often don’t know until you actually start arguing with them that they are irrational or just confused and misled.
George H. Smith has a pretty good essay about arguing with people to convert them to rationality, “Atheism and the Virtue of Reasonableness”. For example, he advocates the “Presumption of Rationality”—you should always presume your adversary is rational until he demonstrates otherwise. I don’t know if the essay is online or not; I read it as the second chapter of “Atheism, Ayn Rand, and Other Heresies.”
Irrational thought patterns can be nearly anything, but of course they strongly tend to form around standard human cognitive biases. This saves a great deal of time.
“Most people are mostly ruled by their emotions.”
To be more specific, most men, for a considerable portion of their lives, are mostly ruled by their sex drives.
To be clear, she never did say, “I am deceiving myself” or “I falsely believe that there is a God”.
I stand corrected. I hereby strike the first two sentences.
I would expect a reply along the lines of: It’s precisely because I can’t trust my own reasoning when deciding which false beliefs I should have that I accept these which are handed down. I pick Judaism because it’s the oldest and thus has shown through memetic competition that it’s the strongest set of false beliefs one could have.
Or… “I pick Christianity because it’s the most popular and has therefore proven itself memetically competitive.”
I have a lot of friends who think “it’s old therefore it must be good to have survived this long” about Tarot and eastern religions etc.
Personally I’d wanna eliminate the false beliefs even if it cost me my mojo, but that’s a different set of priorities I guess.
In fact, the argument from tradition is considered very strong in alternative medicine in particular and New Age culture in general, even if whatever it is was actually made up^W^Wrediscovered last week.
Consistent consciously intended self-deception may be hard. But our minds are designed to produce self-deceptions all the time without us noticing. Just don’t look behind the curtain and “let it be”, “go with the flow” etc. and you can be as self-deceived as most folks.
“I believe that people are nicer than they really are.” That part made me ponder. Because, actually, it’s something I believe, too. So I froze for a while, and looked at that belief. Do I have Escher loops in my belief networks? Well, maybe, I’m far from being a perfect Bayesian, but I can’t allow myself to stop there.
My first justification for that thought was: I don’t refer to the same thing in the two parts of the sentence. A bit like “sound” can refer to acoustic vibrations, or to a perception, and if you switch from one to the other within the same sentence, you can make a sentence that seems self-contradicting but is still valid.
“People” is a vast group. Nice or not is a characteristic of a person. So, to attribute “niceness” to people in general, you have to make an aggregate value. There are many ways to make an aggregate value, for example, the mean and the median. So that sentence could mean something like “I believe the median person to be nicer than the average person” (implying a minority of very un-nice people who drag the average down, but don’t change the median).
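The mean-versus-median point can be made concrete with a small sketch. The “niceness scores” below are entirely made up for illustration: most people score high, but a small minority of very un-nice outliers drags the mean down while leaving the median untouched.

```python
from statistics import mean, median

# Hypothetical niceness scores (0-10) for a group of people.
# Most are fairly nice; two outliers are very un-nice.
niceness = [7, 7, 8, 8, 8, 9, 9, 0, 1]

print(mean(niceness))    # ≈ 6.33 — dragged down by the two outliers
print(median(niceness))  # 8 — unaffected by the un-nice minority
```

So “the median person is nicer than the average person” is a perfectly coherent claim about a skewed distribution, not a contradiction.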
But then I thought, “Hey, stop. You’re trying to find excuses here. That’s not really what you meant with that sentence, or you would have said it clearly. Don’t make excuses for yourself; just face the fact that you were tying knots in your beliefs.”
So I tried to dig into where this knot could have come from. And I think I found it; it’s linked to the first excuse, but not as simple. The problem comes from the fact that I use different algorithms to evaluate the niceness of a single person I’m interacting with (be it a friend or family member, or just a passerby asking me “What time is it?”) and to evaluate the overall niceness of “people” in general.
When I evaluate the niceness of people in general, I think about the horrors of history, about the Milgram experiment, about the crimes the news loves to report, about the scary statistics on the number of husbands who hit their wives… And also about the “heroes”: those who risked their lives to hide unknown Jews during WW2, those who ran into a burning house to save their neighbor. That gives me a mixed image of people in general, neither very nice nor very un-nice, capable of the best and of the worst.
When I evaluate niceness in one single person interacting with me, I tend more to recall my own interactions with individual persons. And in those, I have a few bad memories (like being assaulted once for my money and cell phone), but mostly good ones; be it by luck or by selective memory, most of the interactions with others I can remember were positive. So when I interact with a new individual person, I assume there is a high chance of that person being “nice”, even if I have a more mixed view of humans in general.
That probably comes from deeper, evolutionary-psychology reasons: the individuals you interact with are your tribe, so they are friendly; people in general are other tribes, not so friendly. But I’m not well versed enough in evolutionary psychology to go further down that line. Anyway, that’s, I think, where the contradiction comes from. It may be partly justified by the fact that the median is higher than the average, if it really is (which I’ve no factual evidence of, only a vague feeling). But it mostly comes from using two different algorithms, which should, in a well-calibrated brain, lead to the same result, but which for many reasons (all the biases, imperfect knowledge, …) just don’t.
But if you’d actually meant this you’d have just said “The median people are nicer than the average people”. Saying “I believe the median people to be nicer than the average people” would indicate that you didn’t believe it but did believe you believed it.
I don’t quite agree there. Saying “I believe the median people to be nicer than the average people” indicates that you believe that you believe it but it doesn’t indicate that you don’t actually believe it. You could say it is neutral with respect to whether or not you actually believe it but not that it indicates outright that you don’t.
Indeed, but it does hint that you don’t actually believe it, otherwise you would have said the simpler thing.
I disagree. In general, saying “I believe x” is evidence that you believe x, and therefore cannot be evidence that you do not believe x. I would be interested to see evidence that people usually use “I believe x” in such a way that it can be taken as evidence that one does not believe x.
I believe that people usually use “I believe x” instead of “x” in cases where they want to stress the possibility, however small, that they are wrong. Usual caveats for religious and “I believe in” statements, as well as unrelated senses of ‘believe’, apply.
Yes, that distinction definitely applies to me. Usually when I say “X” it means “I believe X with almost certainty”, while saying “I believe X” indicates that there is still some doubt, maybe a 90% confidence but not a 99% confidence.
But in that specific case, as Misha said, I didn’t need to actually believe it—it was a belief in belief in my chain of thoughts, an attempt to rationalize the initial mistake, that appeared, with further analysis, to not be the real cause of it. Having this as a real belief or not wouldn’t change the reasoning.
And this is, in fact, part of kilobug’s point.
(while we’re on the subject, the plural of belief is “beliefs”, contrary to all reason)
“I wish I could believe that no one could possibly believe in belief in belief in belief...”
You wish you could believe Eliezer? Is this a deliberate stroke of irony or a subconscious hint at the fact that you do have an empathic understanding of the thought processes behind tailoring your own beliefs?
I think the idea behind this is that he wishes reality played out in such a way that, to a rational observer, it would engender belief. It’s a roundabout way of saying “I wish reality were such that...”
Hrm… While on the one hand I can look at her position and basically react with a “your mind is entirely alien to me”, on the other hand, I can actually imagine being in that state.
That does NOT mean, of course, that it is a reasonable state to be in, but it does seem to be the sort of state that my mind can support.
I guess the basic key is that human minds aren’t necessarily naturally consistent. So we can end up in actually inconsistent states, including states a bit confused about consistency itself.
A bit more of a personal example would be a state I sometimes recall having been in in the past, and have certainly seen in others: when one might say something like, oh, I dunno, “scientifically, the universe is about 13.7 billion years old and earth is about 4.5 billion years old… and of course, the world was created about 6000 years ago.”
As near as I can tell, what happens is that we almost imagine the “scientific world” and the “religious world” as parallel universes that… are actually the same one, so mentally we keep track of it by keeping track of different things.
The way this works is that someone might manage to end up in a state in which they completely fail to really face the question of “okay, but if you rewind time a bit, will you see the universe poofing into existence 6000 years ago, or can you go farther back, etc? ie, what ACTUALLY happened in ACTUAL REALITY?”
Then, when facing that question, all sorts of Escher mentality stuff starts forming as a defense. But what I think initially happens, at least in part, is sort of mentally tracking those as being about different subjects, rather than contradictory statements about the same thing. So that one will end up, with “science glasses”, visualizing prehistoric humans doing stuff tens of thousands of years ago, while etc etc etc...
At least, that’s my own, partly introspective model of what’s going on here, of how people can end up in these states.
I think that people who had actual mental models of the world would notice a contradiction that large.
People who profess two different beliefs may not see a contradiction. It’s just good to profess one, and also good to profess the other, for different reasons. They aren’t visualizing a world that, at one time or another, needs to either poof or go on. They’re visualizing that “science” and “religion” both seem like good groups to join.
I think that may be part of it, but I’m also thinking back a bit to when I was more religious, and so on, and also thinking about how some people I know seem to talk, and as near as I can tell, there really does seem to be a bit of that.
I’m claiming they’re visualizing a world that goes “poof, ‘LET THERE BE LIGHT!’” AND visualizing a world that goes farther back, and somehow doing some form of funny doublethink that has them thinking of those as different worlds that are both in some sense true, while some aspect of them is treating those not as contradictory models, but almost as, well, different worlds. ie, two different “truths” (“but what is truth?” :))
That is, simply holding the contradiction in place, having two “models”, not along the lines of two competing models, but that (though they don’t actually notice it), they’re imagining it more as parallel worlds that, depending on circumstances, they’ll consider either one or the other “this world”
They would (usually, see somewhat below) not ever actually say, or even notice, that they’re thinking that way. In other words, I’d expect that if you asked such a person something like “do you believe in a set of parallel realities, one in which the world was spoken into existence ~6000 years ago, and another about 13.7 billion years old or at least certainly older than 6000 years”, they’ll probably give you funny looks. But I think, without them noticing, something like that is going on in how it’s being stored.
And I can speak from personal experience about some of the REALLY weird stuff I used to think in terms of, so it’s in part a “pay no attention to the contradiction behind the curtain” situation.
Heck, sometimes when I bring various contradictions up, I’ll get responses like “this isn’t a debate class” or “this isn’t a court room and you’re not a lawyer”, and basically have it laughed off like that from some family members. (and, of course, the infamous “in your opinion” fully general retort to any position you don’t like. :))
I’m not saying this is all of it, but it sure seems to me that something like this is going on in some cases. It may also be what underlies stuff like “I believe people are nicer than they are”. That is, statements like that may partly cash out to “I have a couple different models of people, one of which says they’re nicer than the other. I hold both of these at the same time, but I call one my belief, and one the actual situation”
At least, when I try to imagine being in a mental state that could provoke me to utter such a statement, ie, when I try to simulate that state on myself, that seems to be what the result “looks like.”
Oh, that bit from earlier, well… sometimes it’s made a bit explicit.
I’ve come across some bits of occult philosophy that basically talk about how there can be many histories that are “true” (no, not in the sense a physicist might talk about interference), and they’ll explicitly say stuff like “there’s the actual historical history, but that’s not the only ‘true’ one…”
But also just from introspection, well, it does feel to me that in the past I would be in such a state, have multiple models that I wasn’t so much treating as competing so much as treating as, well, simply true, in different senses.
The Escher mental tangle can get REALLY strange. :)
From what I’ve seen, fundamentalist Christians (this is the only group I’ve had a chance to speak to) often see the contradiction, and are PROUD of their ability to believe on ‘pure faith’ despite it. As if it’s some kind of accomplishment to say ‘wow, god is so powerful that he can even overcome THAT’. I don’t know how far they carry through in creating mental models of the world, but I know that their expectations of a world with God in it are VERY different from a world without god, i.e. the node is included in their models. This is a particular religious group where receiving “prophetic words” and visions is common, and the people I knew based their expectations on what “God” said to them in these visions. And were sometimes sorely disappointed, but their ‘faith’ never seemed to be affected. At the start, they seemed as alien to me as the woman you’re describing seemed to you. After befriending some people in this group, I started to understand the geometry of their minds a little bit more. This was nearly a year ago, though, so I have trouble explaining the insights I’ve had because my mind has gone back to ‘how could anybody be that STUPID?’
I am so much a one-level person that my sense of social insincerity has atrophied.
Rational straight man syndrome. So much a truth-finder you forget how to not speak the truth.
This seems to be associated with higher than average testosterone levels. If you inject a random man with testosterone he will be very prone not to lie and to be overly straightforward.
Sources?
Maybe I should get a blood test.
Here: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0046774
I suffer the same symptom. (and have an excessive amount of body hair, not that that’s more than negligibly indicative of high testosterone levels)
What’s the cheapest/easiest way to get tested? (more out of curiosity than anything else)
If I understand it correctly, you can go to your physician and ask for it. The test itself is quick, requires a blood sample, and I don’t think it is very expensive.
“And so humanity’s token guardians of sanity (motto: “pooping your deranged little party since Epicurus”) must now fight the active worship of self-deception—the worship of the supposed benefits of faith, in place of God.”
As I keep saying, helping people to overcome biases (such as the above) is a lot easier if there are psychologically viable places for people to jump to once they’ve overcome their bias.
You should have spent much more of your time in this debate convincing your tangled friend that, if she were to abandon her religious belief (or belief in belief, or whatever), she would still be able to feel good about herself and good about life; that life would still be a happy meaningful place to be.
Maybe she has a massive internal guilt complex and thinks of herself as a bad person, and she thinks that only religion can help her with this. Maybe she is frightened that atheism will lead to nihilism.
“You should have spent much more of your time in this debate convincing your tangled friend that, if she were to abandon her religious belief (or belief in belief, or whatever), she would still be able to feel good about herself and good about life; that life would still be a happy meaningful place to be.”
I don’t think Eliezer cared so much to correct someone’s one wrong belief as much as he cared to correct the core that makes many such beliefs persist. Would he really have helped her if all his rational arguments failed, but his emotional one succeeded? My guess is that it wouldn’t be a win for him or her.
Well that depends on whether your aim is to make people have correct beliefs, or whether you want to make people have correct beliefs by following the ritual of rational argument… and I think that EY would claim to be aiming for the former.
What use is it to have correct beliefs if you don’t know they’re correct?
If the belief cannot be conveniently tested empirically, or it would be useless to do so, the only way we can know that our belief is correct is by being confident of the methodology through which we reached it.
When I’m fleeing through an ancient temple with my trusty whip at my side, and I come to a fork in the road, I’ll take the path I believe leads to safety. This will turn out to be a wise choice, because the other one would lead me to a pit full of snakes, falling boulders, and almost certainly walls that slowly but surely move closer and closer. That’s the sound of inevitability.
I naturally prefer having enough evidence to be confident in my beliefs. Given time I would definitely look up the trusty map I was given of the doom-riddled temple. I’d also get someone else to go through ahead of me just to make sure. However, my beliefs will inevitably determine what decisions I make.
To be honest I am a little confused about what that question means. It makes no sense to me, although I can see that someone would conceivably be able to wrangle their mind into that incoherent state. If they believe, but apparently don’t know that they believe then I assume that all their decisions are made in accordance with that belief but that they will describe their belief as though they are not confident in it.
“I naturally prefer to have a high level of confidence in my beliefs.”
Doesn’t that depend on how reliable those beliefs are?
If you’re fleeing through the temple pursued by a boulder, you don’t want to dither at an intersection, so whichever direction you think you should go at any one moment should stay constant. But there’s no reason why your confidence should be high to avoid dithering; you need merely be stable.
“I’ll take the path I believe leads to safety. This will turn out to be a wise choice”
If, and only if, your belief is correct. If your belief is wrong, your choice is a disastrous one. Rationality isn’t about being right or choosing the best course; it’s about knowing that you’re right and knowing which is the best course to choose.
Thanks Annoyance, I replaced ‘have a high level of confidence’ with ‘having enough evidence to be confident’. That makes my intended meaning clearer.
Then I think I agree with you, mostly. If time or a similar limited resource makes rigorous justification too expensive, we shouldn’t require it. But whatever we do accept should be minimally justified, even if it’s just “I have no idea where to go so I’ll pick at random”.
I wouldn’t look at the map if I were running from the boulder. But I would have looked at it before entering the temple, and you can bet I’d be trying very hard to retrace my steps on the way out, unless I thought I could identify a shortcut. Even then I might not take the gamble.
As a theist, I don’t believe in God because I perceive some positive benefit from that belief. My experiences and perceptions point to the existence of God. Of course those experiences and perceptions may be inaccurate and are subject to my own interpretations, so I can’t claim that my beliefs are rational. I accept on an intellectual level that my belief could be wrong. This doesn’t seem to enable me to stop believing.
However, I am involved in a religious community because there are positive benefits—chiefly that of being able to compare notes with other people who share my irrational belief in God and my desire to do good work in the world. I can see that there might be positive benefits in religious communities for non-theists, though I don’t really see the point.
I know several non-theists, including atheists, who belong to religious communities because they value the benefits that such belonging provides. It helps, of course, that they belong to the kinds of religious communities that welcome people like them.
There are also plenty of non-religious communities that one can belong to. These also provide the “benefits of belonging” without having to be the odd one out (ie the person that doesn’t actually follow the one major point of the community itself). Therefore I agree with artsyhonker in not seeing the point. I’d only consider it the rational move if there were no such other communities nearby (or none that were attractive).
Sure. That makes sense, and if it weren’t for my actual experience with people who do seem to get benefits from that group membership that they consider worthwhile, despite also being members of other communities, I would agree with this wholeheartedly.
Of course, it’s certainly possible that they’re all merely confused and not actually getting benefits they value, or that they could be getting all the same benefits from their other groups and somehow don’t realize it.
Ah—no—you miss what I was trying to say. They definitely get benefits—not at all confused. I’ll try and give an example to explain what I mean—and I’ll leave religion out of it for the moment.
Let’s say that near to me is the local football club, and the local wildlife-walks group. Both of them have a thriving community and are welcoming and interesting people. Thus if I join either one I will be assured of the benefits of belonging to a community.
But let’s say that I happen to have absolutely no passion for football, but really enjoy wildlife walks.
So the rational move for me would be to join the wildlife group in favour of the football club: not because there are no benefits to the football club, but because I would get even more out of being in a group where I share the passions and interests of the majority of members.
This is kinda what I was driving at. There’s nothing wrong with an atheist joining a local Christian group to gain the benefits of community… but if there’s another local group that has the same sense of community, founded around a principle that the atheist actually shares, then they’ll probably get even more out of it.
If, in that situation, I observed you evaluating both groups and choosing to join the football club, that observation would increase my confidence that you are obtaining something of value from the football club that you aren’t getting elsewhere, even if I have no clue what that might be.
Yup, no argument here. I would be curious to know what it was.
(nods) Me too. The impression I’ve gotten from conversations with my non-theist friends who belong to religious communities is that they provide a more close-knit and mutually committed community than their secular equivalents. This is especially relevant for those with children.
Yes, I’ve found that most (but not all) hobby-based communities tend to be fairly loosely constructed. People are expected to hang around for a few years, perhaps, but not really to contribute more than just some passing time.
Exceptions I’ve found to this rule are: ethnic/expat groups, parenting support groups, and (strangely) some geeky groups: SF/F (in certain cities), and the SCA.
The latter was my biggest surprise, when I joined. There are third-generation SCAdians… some of whom have a fourth generation on the way.
SCA?
The Society for Creative Anachronism
AKA an excuse to have fun dressing up and feasting the night away after a day of hand-to-hand fighting (if that’s your wont)… along with a zillion other interesting things to learn and do, with the only caveat being a well-meaning attempt at remaining within the time period of “fall of the roman empire up to and including the early renaissance” (oh, and don’t take “renn faire” as a good example… in the SCA everybody is a participant, not a spectator).
The hand to hand combat is tempting.
Yep—it brings in most of the (male) converts… whereas the feasting/dancing/singing/cooking is what usually tempts in us womenfolk… this means that it’s not only appealing to the geeky types… but actually has an amazingly good gender balance. It also means that you can bring your SO and they will actually have something to do. This is a benefit of community-building not to be overlooked. :)
So, it’s kind of like anime conventions and cosplay then.
Obviously we need to work out how to integrate costumes or cooking into LessWrong meetups...
Nutrition?
:)
Obviously the costumes need integrated paperclips…
http://en.wikipedia.org/wiki/Society_for_Creative_Anachronism
I was one of those people for a while. I was accepted, I think, because the particular group I hung out with had an overwhelming need to convert people, and couldn’t resist a juicy atheist/agnostic specimen like me.
I also sing in a church choir, which is kind of similar except that it’s explicit I’m there for the musical education and not the religion.
Ah, that’s unfortunate.
As far as I can tell, the religious communities my atheist/agnostic church-going friends belong to consider them full-fledged members of the community no more in need of alteration than anybody else, which seems like a much more honest arrangement.
Though, of course, I have no way of knowing for sure.
Of course it doesn’t. To accept that your belief can be wrong isn’t the same as accepting that it is wrong. The former is a complete triviality (if a person doesn’t accept that his particular belief can be wrong, even in principle, either the belief is not a real belief, or the person is seriously irrational). The latter not only may enable you to stop believing, but should force you to do so.
As is true for the experiences of any person, and still, a lot of people strive to have rational beliefs. Your formulation, by contrast, seems to imply that you happily accept being irrational, which leads me to ask why. Is it because you think that rationality (however you define it) isn’t always the best way to arrive at true beliefs? Or because you don’t always mind having false beliefs? Or some other reason?
I’ve found the same thing–if you want to actually accomplish good things in the world, it seems more rational to attach yourself to a religious community than not. I have my own reasons for believing that it’s morally right to help others, but a lot of the non-religious/atheist people my age haven’t really thought about this at all, and religious people my age tend to be VERY involved.
If it’s okay with you, would you mind describing these experiences / perceptions and how they led to your particular beliefs? I’d be quite interested in hearing.
Mundus vult decipi, ergo decipiatur (“The world wants to be deceived, so let it be deceived”)
I know some people who are like the woman you describe, my own folks might be like that to some extent. I became atheist pretty early on. So I’m not sure that adults who believe in belief are likely to be passing that along to their kids, if they even try. In my case, I put on a show for a while, but when I stopped it was no big deal.
If these people are able to agree with a scientific worldview and not be obstructionist on things like stem cell, but simply want to add “and I believe there is a god” to the end of it, fine. Seems like a natural step towards the end of belief in god entirely.
To further illustrate the point that self-deception isn’t easy: if you believe you’re shy, you can’t just make yourself believe you’re not shy.
Maybe you can make yourself believe that you believe that you’re not shy, but I don’t think you’ll reap many benefits from placebo effect—you’ll still get nervous when you want to speak up or go talk to a girl you don’t know or whatnot. You can’t argue yourself logically into self-confidence.
It does tend to be counterproductive to directly convince yourself you are not shy. I know I wouldn’t have much luck just willing myself to believe I was self-confident. You can, however, argue yourself into self-confidence, if you do so indirectly.
One way to argue yourself into self-confidence is to identify sources of bias, noticing irrational thought patterns that lead to the conclusion that you are shy. This is the cornerstone of Rational Emotive Behavioral Therapy and of Cognitive Behavioral Therapy in general. For example, you may observe that you have overgeneralised from one particular incident where you hesitated from nerves. One incident is clearly insufficient evidence. You may also observe that your thinking is being distorted towards pessimism simply because you slept too few hours the previous night!
The other obvious way to persuade yourself that you are not shy is simply to realise that you have just brought the situation into your self awareness. Once the thought is brought to the conscious level it can be simple to consider the situation from a different perspective, perhaps rationally evaluating the risks and rewards. That helps release some of the anxiety. The key there is that you aren’t forcing belief that you are self-confident, but convincing yourself that self-confidence is the rational state to be in. Belief in that self-confidence follows naturally.
Now, this is all well and good for managing social anxiety and certainly a useful tool for improving our dating game. But how exactly does it relate to the quest for belief in belief?
If you can use sound arguments and evidence to change beliefs towards a desired belief in a belief then you can almost certainly use bogus arguments and fictional evidence to grant yourself a belief that you believe something stupid. It’s hard to force belief in believing you’re not shy or belief that you believe in a God. However, the application of focussed rational thought helps the former while the latter is handled by the unconscious irrational wriggling that humans are so talented at.
Why destroy placebo effects? According to some stuff Robin Hanson points to, it seems that most of medicine might consist of placebos. Aren’t you fighting what wins in favor of the truth?
There is evidence that placebos work even if you know that they contain no active ingredients, so we may be spared this interesting dilemma!
Why would we regard an effective placebo as a victory? Why would we want our enemies to profit?
I can think of all sorts of reasons to oppose the existence of a type of person who is made more fit by delusion. Simple eugenics combined with long-term thinking would seem to suggest that we should encourage the destruction of such people.
If you regard those who are not rational as ‘our enemies’, then I suppose that reasoning holds.
A Utilitarian, considering what’s best in the long term, would certainly prefer people who’ve managed to be made more fit by the truth: delusion is clearly more costly ceteris paribus.
Anyone who accepts an egoistic ethics should accept that the mere fact of them being ‘enemies’ is enough to want them less fit.
Kantians value truth for obvious reasons. Lying is probably the only act to which Kant successfully applied the categorical imperative.
Of course, a certain sort of Altruist might think that making people feel nice now is worth… well, they’d probably stop thinking at that point.
But even given all this, as it turns out I’m one of the ordinary humans that’s aided by placebos, and don’t regard humans as the ‘enemy’. So I’m in favor of placebos, for now. Though I’m doubly in favor of altering human cognitive architecture so that the truth works even better.
Just a data point: I spent over twenty years thinking multiple hours every week about subjects related to my religion. I was deeply confused, but I needed too badly for it to be true to go earnestly looking for evidence that it was false. Which reminds me of another Yudkowsky quote:
If my religion was false, not only would it mean that the people around me were horrifyingly delusional for believing it, but it would also mean that the wonderful future I was told about would be replaced with the utter destruction of my soul—and everyone’s soul—at death.
As the years passed, very slowly and inevitably, I lost faith. But why did it take me over 20 years between the onset of doubt and my decision to leave the religion? It’s easy to yell out “confirmation bias”. But everyone has that. I think the real problem is that in all that time, no one gave me a link to cesletter.org. I heard lots of atheists hurling cheap insults at believers, belittling them, talking about how obvious it was that they were right and we were wrong. I heard precious few people making strong but fair and compassionate arguments of the sort I needed to hear.
I know it’s been 4 years since your comment, but if I’m reading this many years later there will be others later still.
Another former Mormon here. I also encountered the infuriating prevalence of destructive criticism.
Also of note is the toxicity of places like r/exmormon. A significant portion of those who frequent exmo-specific groups tend to be those who are angry, bitter, and still blame the church for everything bad in their life even decades after leaving. Those with a more healthy outlook tend to move on and find better things to do. Those with a less healthy outlook also seem to be more likely to produce Mormon-critical media and infect others with their own biases, despite having otherwise valid criticism.
Back when I was a questioning member, encountering exmo groups was counter-productive because it only served to feed the confirmation bias of “wow, all these ex-mormons sure are miserable, just like I’ve been told!”
It sounds to me that she simply is using a different definition of “to believe”. If she says “I believe people are nicer than they are,” I think she means something like, “I choose to act as if people are nicer than they really are, because it is consonant with my sense of morality to do so.” It’s choosing to give people the benefit of the doubt, knowing they probably don’t deserve it.
I would much rather think of it the other way around. As far as I know, the average person is exactly as nice as the average person is. However, when she said she believed people are nicer than they actually are, I guess it is because her estimate of the average niceness of a person is biased, and she is actually falsely believing people are worse than they are. This might well be some kind of defense mechanism she developed. Of course, if you expect worse than average, the chances of being positively surprised are much higher than the other way around.
Placebo effects from ‘belief in (false) beliefs’ only work as long as self-deception is maintainable.
I think the point at which self-deception ceases to work is when you can consciously see it breaking your causal models of the world. Highly intelligent people, or anyone for that matter, cannot continue to deceive themselves into believing in god or unregulated markets, or whatever complex concept you pick, if you explicitly show how it breaks a model they cannot disagree with. Controversial topics of the day like belief in god, public policy, etc. are not single data points under contention, but tangled balls of causation that must be dealt with in a somewhat parallel fashion: to see the big picture and say, wait a minute, that cannot fit unless this, and this, and this, and finally reach a dead end and have to relinquish the starting belief. The more abstract or complex a concept is, the easier it is to deceive yourself about it.
The limits of working memory play a role here, and if we are to truly be less wrong, we not only have to overcome biases, but we need to amplify our rational intelligence by using tools designed for these specific purposes. What if beliefs such as ‘a personal god exists’ were as hard to believe in as ‘the sky is green’? What if it were explicitly laid out in front of someone that they absolutely could not hold a belief, because of all the cascading links it breaks in their world model that is confirmed to be ‘reality’?
I want to work on such tools.
Voltaire, using rationalist arguments, concluded that “if God did not exist, it would be necessary to invent him”. So could it be that adhering to facts in all situations is essentially an irrational position?
Consider the following statements:
1) Rational humans (unlike rational AI) should aim to be happy.
2) Rational humans should not believe fanciful notions unsupported by empirical evidence.
3) Empirical studies (e.g. http://www.lifesitenews.com/ldn/2008/mar/08031807.html) suggest that humans who believe in such notions are more likely to be happier.
The consequence of the above statements seems to be that a rational human should reject rationality.
Does anyone see flaws in this reasoning?
Are they actually happier, or do they just believe that they’re happier? ;)
What would the actual difference be? You have a subjective view of your emotions (and of anything else, for that matter), so believing you are happy would be the same as being happy, as long as you are not aware of the fact that you are only believing in your happiness.
I think that someone who merely believed they were happy, and then experienced real happiness, would not want to go back.
I suspect that there is a difference, but I’m not extremely confident of this. It seems to me that a noticeable fraction of the people I’ve encountered over my life are in decidedly sub-optimal situations, and could with relative simplicity change to a more optimal lifestyle, yet are convinced that their own lifestyle is the best thing ever.
This is a perfect example of the web that builds itself around even one confusion of a value statement and a factual statement. I fear we all have these lurking.
I agree with both the “emotion” and “pretend” hypotheses. It is (according to my world view) extremely difficult to pretend to emotions you do not possess. Thus, the easiest way to pretend your beliefs might be to manipulate your own emotions.
I empathize with her here. I believe it is to my advantage to act towards people the way I would act if they were nicer than they actually are. I’ll try to parse that out. Let’s say Alice is talking to Bob. Cindy, at a different time, also talks to Bob. Bob is a jerk; we assume he is not nice.
Alice honestly expects that Bob is nicer than he actually is, and accordingly she is nice to Bob.
Cindy honestly expects that Bob is exactly as nice as he actually is, and accordingly she is dismissive of Bob.
I expect that Bob will be nicer towards Alice than towards Cindy. (Warning: this is starting to feel like a belief, suggesting that it is actually a belief in belief.) My theory is that I should act like Alice. Of course, there are alternatives, like simply being nice to people.
I hope this comment made sense to you. I know I’m pretty confused about it myself now.
I think when you parse this out you realize that there are a lot of other factors at play here, it’s not just a “belief in belief” thing.
Treating someone nicely has an influence on how they subsequently treat you and others. So it’s not so much that you’re believing someone is nice when they’re not, it’s that you’re believing that they do not have a fixed property state of “niceness”, that it is variable dependent on conditions, which you can then manipulate to promote niceness, for the benefit of yourself and others.
None of this is belief in belief. When you look closer you see that you are comparing two different things: how nice Bob has been in the past, and how nice Bob will be in the present and future, dependent on what type of environment he is in. You are thus modifying your behavior on the assumption that your contribution to the environment can make it such that Bob will be nice, or at least nicer. And there is evidence to support this assumption, so it’s not irrational to expect Bob to be(come) nice when you treat him nicely.
It’s just misleading to phrase it as “I benefit from believing people are nicer than they are,” because what you mean by the first “are” (will be) is not the same as what you mean by the second “are” (have been).
I don’t think that would mislead most people, since most people can handle context and don’t expect ordinary English phraseology to conform to logical rigour.
My point was that it’s misleading to those trying to interpret it directly into a logical statement, which is what Eliezer seemed to be trying to do. I’m sure there are lots of people who could read that sentiment and understand the meaning behind it (or at least a meaning; some people interpret it differently than others). It’s certainly possible to comprehend (obviously, otherwise I wouldn’t have been able to explain it), but the meaning is nevertheless in an ambiguous form, and it did confuse at least some people.
I believe the following five things.
(1) Barcelona will not win the Champions League.
(2) Manchester U will not win the Champions League.
(3) Chelsea will not win the Champions League.
(4) Liverpool will not win the Champions League.
(5) I falsely believe one of the statements (1), (2), (3) and (4).
This seems to me like a reasonable counterexample to Wittgenstein’s doctrine.
You need to work with probabilities, and then make statements about your expected Bayes-score instead of truth or falsity; then you’ll be consistent. I have a post on this but I can’t remember what it’s called.
“Qualitatively Confused.”
topynate: It was only for reasons of space that I listed five events with probability 0.8 each, rather than 1000 events with probability 0.999 each; the modification is obvious.
Eliezer: Point taken.
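The “Bayes-score” mentioned above is the logarithmic scoring rule. A minimal sketch (my own illustration, reusing the 0.8/0.2 numbers from this exchange, not code from the thread) of how a probabilistic believer gets scored instead of being labeled simply right or wrong:

```python
import math

# Logarithmic score: you earn log(p) for the probability p you assigned
# to the outcome that actually happened. Higher (closer to 0) is better.
def log_score(p_assigned_to_actual_outcome: float) -> float:
    return math.log(p_assigned_to_actual_outcome)

# Believing "Barcelona will not win" with probability 0.8:
# expected score = P(statement true) * log(0.8) + P(statement false) * log(0.2)
expected = 0.8 * log_score(0.8) + 0.2 * log_score(0.2)
print(round(expected, 3))  # roughly -0.5
```

Stated this way, there is no paradox in holding the five beliefs at once: each probability assignment is evaluated by its expected score, not by a binary true/false verdict.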
I think Wittgenstein’s point was that you’re using ‘believe’ in a strange way. I have no idea what you meant by the above comment; you’re effectively claiming to believe and not believe the same statement simultaneously.
If you’re using paraconsistent logic, you should really specify that before making a point, so the rest of us can more efficiently disregard it.
I judge each of the four teams to have probability 0.2 of winning the Champions League. Their victories are mutually exclusive. Hence I judge each of statements (1)-(5) to have probability 0.8.
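The consistency of these assignments can be checked with a short sketch (my own illustration of the arithmetic in the comment above; the team names are just labels):

```python
# Four mutually exclusive victories, each judged to have probability 0.2.
p_win = {"Barcelona": 0.2, "Manchester U": 0.2, "Chelsea": 0.2, "Liverpool": 0.2}

# Statements (1)-(4): "team X will not win" each gets probability 1 - 0.2.
p_statements = {team: 1 - p for team, p in p_win.items()}

# Statement (5) is true exactly when one of (1)-(4) is false, i.e. when
# one of the four teams wins. Victories are mutually exclusive, so sum.
p_statement_5 = sum(p_win.values())

print(p_statements)   # each statement: approximately 0.8
print(p_statement_5)  # also approximately 0.8
```

So all five statements coherently carry probability 0.8, even though the five cannot all be true at once.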
Hm. Wittgenstein requires that the meaning be “indicative”. In English the indicative mood is used to express statements of fact, or which are very probable. They don’t necessarily have to be true or probable, of course, but they express beliefs of that nature. You say “I believe X” when you assign a probability of at least 0.8 to X; 0.8 is probable, but not very probable. Would you state baldly “Barcelona will not win the Champions League”, given your probabilities? I doubt it. When you say instead “I believe Barcelona will not win the Champions League”, you could equally say “Barcelona will probably not win the Champions League.” But this isn’t in the indicative mood, but rather in something called the potential/tentative mood, which has no special form in English, but does in some other languages, e.g. darou in Japanese (which has quite a complex system for expressing probability). It’s better to just state your degree of belief as a numeric probability.
He is illustrating that “belief” has more than one meaning, for all that he hasn’t clarified the meanings.
A candidate theory would be belief-as-cold-hard-fact versus beliefs-as-hope-and-commitment.
Consider a politician fighting an election. Even if the polls are strongly against them, they can’t admit that they are going to lose as a matter of fact, because that would make the situation worse. They invariably refuse to admit defeat. That is irrational if you treat belief as a solipsistic, passive registration of facts, but it makes perfect sense if you recognise that beliefs do things in the world and influence other people. If one person commits to something, others can too, and that can lead to it becoming a fact.
Treating people as nicer than they are might make them nicer than they were.
Of course, if “belief” does have these two meanings, the argument against dark side epistemology largely unravels...
What I think about here is that, whether or not you care about whether she does right or wrong, to her you are an outsider: one who does not know everything she knows, who has no insight into what she thinks about the things she does, no insight into what she actually intends to do. So in other words you have no real way of judging her actions to be right or wrong. The only way for her to have someone overlook her actions is to actually believe in an omniscient god. I’m an atheist, but I still believe there are good things and bad things for me to do (it might not be a rational thought, but I think of it as a necessary one). In other words, my conscience is the being overlooking my doings.
So my guess here would be that she might give her conscience a name and form it in a way that fits in with other people’s consciences (in other words, any religious group whatsoever). To her, god might well be her conscience with a name atheists don’t like to hear.
“Pooping your deranged little party since Epicurus.”
I love that. Did you pick it up somewhere or do I credit you with it?
If you recognize that, in certain terms, believing certain things has positive instrumental results even if they’re not true, why can’t you simply abolish the false beliefs and just create those results directly?
Human brains are (loosely speaking) Universal Turing Machines—they can emulate any computation. So if we’re looking for a particular set of results, we’re not tied to an invalid way of reaching them. There’s always a valid path that gets us where we want to be.
You’d have to be speaking very loosely for that comparison to be correct. Unless you’re talking about creating posthumans, we’re tied to all sorts of non-universal cognitive architecture. You go to war with the brain you have, not the brain you want.
But those good ol’ frontal lobes permit universal computation. We can do it. We’re just not very good at it.
If you can emulate arithmetic, the only limit is memory capacity. Ignore that issue, and you’re a UTM.
I suppose I should grant that—the principle of charity does not permit me to assume anyone thought there was an equivalent to an infinite tape in reality.
I’ve thought a lot about this question. How about this: a small portion of our brain is dedicated to universal computation, and the rest is dedicated to shortcuts/heuristics that allow us to actually function.
Not just loosely speaking—the brain IS a Universal Turing Machine. Or at least as much a one as currently exists—the key definition is the universality of computations—the infinite tape is a visualization mechanism.
Is that you, Caledonian? (Said without looking at email address.)
I believe this will be the next form of religion.