When it comes to deliberate self-deception, you must believe in your own inability!
Tell yourself the effort is doomed—and it will be!
Is that the power of positive thinking, or the power of negative thinking? Either way, it seems like a wise precaution.
The positive power of negative thinking. There is a book waiting to happen. Scratch that, google tells me the title is already taken. Either way, the idea is fascinating.
Just what is the difference between deceiving yourself and ‘positive thinking’? It is clear that Eliezer advocates telling yourself things that may not actually be true. You may tell yourself “I cannot believe what I know is not true”. In some cases you may know yourself well enough to estimate that there is only a 40% chance that the claim could ever reasonably qualify as true, no matter how diligent your pep-talking may be, yet it may still be worth a try. At first glance that looks like 60% self-deception. Yet there is some sort of difference.
When we go about affirming to ourselves that “I am charming, assertive, have an overwhelming instinct to maintain reflective consistency and am irresistible to the opposite sex”, we are not so much lying as using the mechanics of our brains to alter our computational hardware to an improved state. But then, a believer could plausibly use the same defence.
Is it the potential for self-fulfilment that makes our not-quite-truths ‘ok’? We know that by telling ourselves we are assertive, or that we can’t stand to bullshit ourselves, we probably do influence these traits somewhat. Then again, the more we know ourselves, the better we can judge to what extent we will be able to modify our cognitive behaviors. If we know that we’ll never have the desired trait to a respectable degree, then we have less scope to affirm ourselves without blatant lies. Having more self-awareness would limit our options for self-improvement. Now, there may be something to that connection, but it isn’t something I would want to formalise into my understanding of what constitutes ‘self-deception’.
Could it be that these affirmative non-truths are different because they are self-referential? When Eliezer delved into subjectivity he etched into my mind the quote from Robert Dick, “Reality is that which, when you stop believing in it, doesn’t go away”. We could almost argue that because we are talking about things that change based on what we believe, we are outside the scope of reality and so have free rein. Almost. It still seems to me that, as a statement about the state of the universe, “I can’t fool myself!” may objectively be nonsense both as a current observation and as a prediction of the future, and yet still be worth saying to yourself. That’s right. “I can’t fool myself, and even though I can you’ll probably believe me anyway, which helps, so knock 5% off the probability that I’ll be able to believe something really idiotic. Thanks, bye.”
Maybe the central difference is just that it’s a “white lie”. If the goal is to create the most accurate map of reality, it is quite possibly the case that the optimal strategy is to believe certain false things. Try limiting yourself to only ideally rational behaviors and you may well end up less rational than if you’d taken a few liberties and made allowances for your weaknesses.
It is clear that Eliezer advocates telling yourself things that may not actually be true.
I don’t think so. He is advocating telling yourself something on the condition that telling it to yourself causes it to be true.
It’s not equivalent to telling yourself “I’m attractive to the opposite sex.” Say that you doubted this prior to uttering it. Then, yes, after uttering it, you might have reason to think that it is marginally more likely to be true. But you almost certainly wouldn’t be justified in believing it with high confidence. That is, you still shouldn’t believe the statement, so telling it to yourself is dishonest.
In contrast, Eliezer is suggesting that perhaps regularly uttering the statement
I can’t get away with double-thinking! Deep down, I’ll know it’s not true! If I know my map has no reason to be correlated with the territory, that means I don’t believe it!
does alter you so as to make itself true. If that’s right, then, conditioned on your having uttered it, you are justified in believing what you uttered, so you are not being dishonest.
It’s not a matter of being outside of reality. The utterance is part of reality. That’s precisely why it may have the power to cause itself to be true.
Of course, it may be that this particular statement just doesn’t have that power. If the probability of that were above a certain threshold, I expect that Eliezer wouldn’t advocate saying it unless it’s true already.
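A rough way to formalize Tyrrell’s distinction (my own notation, nothing from the thread): write S for the statement and A for the act of regularly asserting it. For a genuinely self-fulfilling statement, asserting it causally raises its probability, so that

P(S | A) is high, even though P(S) alone is middling,

and so, having performed A, you can believe S honestly. For “I’m attractive to the opposite sex”, by contrast,

P(S | A) ≈ P(S) + ε

for some small ε, which is not enough to license confident belief.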
What evidence is there that yelling at yourself like this is going to make a difference? Let us imagine two kinds of people: those who cannot fall into Moore’s paradox (believing the map but not the territory) and those who can. People in the first class, who are immune to the problem, will gain no benefit from reciting these mantras. People in the second class, for whom there is a real risk of making these kinds of errors, are supposed to vigorously tell themselves that there is no such risk! They are supposed to lie to themselves in the hope that the lie will become true. But why should they believe it?
And how different is this lie, really, from the wannabe god-worshiper who similarly insists to himself that he believes that god exists, even though it is not true?
I can’t help wondering whether this posting is meant to be ironic. It comes perilously close to outright self-contradiction.
Hal, perhaps Eliezer’s view is that there are “suggestible” portions of one’s mind that it is okay to suggest things to, but there is some other, reason-capable faculty that one can and should use to form true, un-self-deceived, evidence-before-bottom-line beliefs.
Whether or not that’s Eliezer’s view, the above view seems right to me. It would be silly not to suggest useful frames, emotional stances, energy levels, etc. to the less rational parts of myself—that would leave me freezing in particular, arbitrary/chance/un-useful starting states. But for the part of myself that can do full cost-benefit analyses, and math, and can assemble my best guess about the world—misleading that part of myself would be terrifying, like putting my eyes out. (I mean, I deceive the reason-capable part of myself all the time, like most humans. But it’s terrifying that I do, and I really really want to do otherwise… including by suggestibility tricks, if they turn out to help.)
Tyrrell and Anna have stated my views better than I’d previously gone so far as verbalizing.
There are large sectors of the mind in which belief tends to become reality, including important things like “I am the sort of person who continues even in the face of adversity” and “I do have the willpower to pass up that cookie.”
But—given that you aren’t actually trying to fool yourself—there’s a chicken-and-egg aspect that depends on your having enough potential in this area that you can legitimately believe the statement will become true if you believe it. At that point, you can believe it and then it will be true.
There’s an interesting analogy here to Löb’s Theorem (sketched after this comment) which I haven’t yet categorized as legitimate or fake.
To look at it another way, this sort of thing is useful for taking simultaneous steps of self-confidence and actual capability in cases where the two move in lockstep. Or, in the case of anti-competencies like doublethink, the reverse.
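For reference, the theorem Eliezer mentions (this is the standard statement, not anything from the post): Löb’s Theorem says that for a theory containing enough arithmetic, with □ read as “is provable”,

□(□P → P) → □P,

i.e. if the theory proves “if P is provable, then P”, it already proves P outright. The analogy, as I read it: “if I believe S, then S will be true” plays the role of □S → S, and the question of whether you may then conclude S is exactly what feels either like legitimate self-fulfilment or like pulling a belief out of thin air.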
“I have the potential to be the sort of person who continues even in the face of adversity”, or “it is more in my interests to pass up that cookie”, or “I really do have a choice whether or not to pass up that cookie”. That is what I would recommend.
bill, below, has mentioned “Act as if”: “I choose to Act as If I can continue even in the face of adversity, and I intend in this precise moment to continue acting, even if I may just fall down again in two minutes’ time”.
These have the advantages of being more likely to be true.
Rambling on a little, to be the sort of person who continues in the face of adversity is Difficult, and requires practice, and that practice is very worthwhile. Stating that it is True might make you fail to do the practice, and instead beat yourself up when it appears not to be true.
Dishonest or not, convincing yourself that you’re attractive to the opposite sex is more likely to produce a positive result. And a rationalist should win. ;-)
Sorry for the pedantry, but I believe that’s Philip K. Dick’s quote.
To the “sky is green” idea, I’d counter that the verification path might not work for converting people to atheism. Mormons, for instance, suggest to people that they will feel a burning in their heart when they read the Book of Mormon, which supposedly proves the book’s veracity. You need to logically piece together that no such physical sensation would be sufficient to objectively verify anything. There isn’t an easy falsification of religious/magical thinking, only the slow work of following chains of inference from observation. Non-believers just make a commitment to the minimal contortion of facts to fit their paradigm. As obvious as the Silence seems to be, some people don’t seem to hear it.