Could more people please share data on how one of the above techniques, or some other technique for reducing consistency pressures, has actually helped their rationality? Or how such a technique has harmed their rationality, or has just been a waste of time? The techniques list is just a list of guesses, and while I’m planning on using more of them than I have been using… it would be nice to have even anecdotal data on what helps and doesn’t help.
For example, many of you write anonymously; what effects do you notice from doing so?
Or what thoughts do you have regarding Michael Vassar’s suggestion to practice lying?
(Reusing an old joke)
Q: What’s the difference between a creationist preacher and a rationalist?
A: The rationalist knows when he’s lying.
I’m having trouble resolving 2a and 3b.
2a. Hyper-vigilant honesty. Take care never to say anything but what is best supported by the evidence, aloud or to yourself, lest you come to believe it.
3b. Build emotional comfort with lying, so you won’t be tempted to rationalize your last week’s false claim, or your next week’s political convenience. Perhaps follow Michael Vassar’s suggestion to lie on purpose in some unimportant contexts.
I find myself rejecting 3b as a useful practice because:
1. What I think will be an unimportant and undetectable lie has a nonzero probability of being detected, and considered important, by someone whose confidence I value. See Entangled Truths, Contagious Lies.
2. This post points out the dangers of self-delusion from motivated small lies, e.g. “if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself.” Is there any evidence to show that I’ll be safer from my own lies if I deliberately tag them at the time I tell them?
3. Building rationalism as a movement to improve humanity doesn’t need to be encumbered by accusations that the movement encourages dishonesty. Even though one might justify the practice of telling unimportant lies as a means to prevent a larger, more problematic bias, advocating lies at any level is begging to be quote-mined and portrayed as fundamentally immoral.
4. The justification for 3b (“so you won’t be tempted to rationalize your last week’s false claim, or your next week’s political convenience”) doesn’t work for me. I don’t know if I’m different, but I find that I have far more respect for people (particularly politicians) who admit they were wrong.
Rather than practising being emotionally comfortable lying, I’d rather practise being comfortable with acknowledging fallibility.
I did preface my list with “I’m not recommending these, just putting them out there for consideration”. 2a and 3b contradict one another in the sense that one cannot fully practice both; but each is worth considering. Also, many of us could do more of both 2a and 3b than we currently do: we could be more careful to really only ever tell ourselves what’s best supported by the evidence (rather than pleasant rationalizations), and to mostly say only this to others as well, while also making the option of lying more cognitively available.
Is there any evidence to show that I’ll be safer from my own lies if I deliberately tag them at the time I tell them?
There’s good evidence that people paid $20 to lie were less likely to believe their lie than people paid a mere $1 to lie. And similarly in a variety of other studies: people under strong, visible external pressure to utter particular types of speech are less likely to later believe that speech. It’s plausible, though not obvious, that people who see themselves as intentionally manipulating others, as continually making up contradictory stories, etc. will also be less likely to take their own words as true.
Building rationalism as a movement to improve humanity doesn’t need to be encumbered by accusations that the movement encourages dishonesty.
I agree this is a potential concern.
Rather than practising being emotionally comfortable lying, I’d rather practise being comfortable with acknowledging fallibility.
Vassar’s suggestion isn’t designed to help one avoid noticing one’s own past mistakes. That one really wouldn’t work for a rationalist. It’s designed to let you seriously consider ideas that others may disapprove of, while continuing to function in ordinary social environments, i.e. social environments that may demand lip service to said ideas. See my comment here.
For example, many of you write anonymously; what effects do you notice from doing so?
I’ve noticed that when I’m anonymous, I still avoid expressing most of the opinions I’d avoid expressing when I’m not anonymous, but because they might be seen to reflect badly on my other beliefs rather than on me personally. I should probably try a throwaway account, but among other things I’d still worry about how it might reflect on the community of rationalists in general.
I haven’t noticed consistency pressures as such ever affecting me much, but I should watch more closely, and the list of suggested solutions looks great.
Or what thoughts do you have regarding Michael Vassar’s suggestion to practice lying?
Out of the question. Lying is illegal where I live.
Where is lying illegal? That sounds so terribly illiberal that I’d like to avoid even visiting there, if possible.
I think he was lying.
I use 2a when socializing with multiple polarized ideological camps, e.g. libertarians and social democrats. I can criticize particular fallacies and rationality errors of the other camp in terms of failure to conform to general principles, and when I do this I think of the ways in which my current interlocutors’ camp also abuses those principles. I find that doing this helps me keep my reactions more level than if I mention errors in terms of idiosyncratic problems (e.g. specific interest groups associated with only one faction).
For example, many of you write anonymously; what effects do you notice from doing so?
Within this community? Virtually none.
There is a difference between pseudonymity and anonymity. I may not attach my real name to posts here, but I would be deluding myself to think I could disregard external social pressures from within the communities where this handle is used. True anonymity is a very different beast.
Thanks for the info. Have you tried writing with throw-away handles? Do you find you think differently under those circumstances?
I am inclined to say that there’s an increased tendency to regard dialogue as impersonal: each truly anonymous post feels more like a one-shot contribution, with no expectation of consistency or further interaction, and there’s a reduced tendency to identify directly with what you write. E.g., I may be more likely to post what I’m thinking in the moment without worrying about having to defend the position, or caring if I change my mind later. However, I strongly suspect there are confounding factors, and since I don’t make a frequent habit of posting anonymously (Robin would probably suggest that I am too driven by status-seeking), I can’t speak terribly authoritatively on whether these impressions are accurate or whether I’m just repeating what I think “ought” to be the case.
For instance, one possible distracting issue is that group consensus seems to become more persuasive as discussions become more anonymous, and the resulting undercurrent of mob mentality presents an entirely different failure of rationality.
I’ve used throwaway handles to argue for views that I’m not convinced of, both to shake myself out of consistency pressures/confirmation bias and to elicit good criticism. I find that the latter is particularly helpful.