| What about organisms that claim to be pure truth-seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
Having been in this circumstance in the past—i.e., for most of my life believing myself to be such a truth-seeker—I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes.
It’s what Robert Fritz, in his books, calls an “ideal-belief-reality conflict” (IBRC): a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideals to others as well as striving to live up to them yourself.
Of course, you can have such a conflict about tons of things, but pretty much any time somebody has an Ideal In Capital Letters, something they defend with zeal, you can bet this mechanism is at work.
The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I’ve squashed dozens in myself, including several relating to truth and rightness and fairness and such. They’re also a major driving force in chronic procrastination, at least in my clients.
| In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives.
They’re not consciously deceiving anyone; they’re sincere in their belief, despite the fact that this sincerity is itself a deception mechanism.
| In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our “Ideal” becomes the excuse for our failure to actually act. For example, if your ideal is Truth, you can always demand more truth before doing something you don’t want to do, while insisting that anything you do want to do is supporting the Truth. (Not that any of this takes place consciously, mind you.)
So someone who’s not actively investigating their own ideals for IBRCs is not really serious about being a rationalist. They’re just indulging an ideal of rationalism and signaling their superiority and in-group status, whether they realize it or not. (I sure never realized it all the years that I was doing it.)
I agree that people can take “really good ideas” too far, but I’m not satisfied by the distinction you draw.
| The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
Only if you’re speaking in an abstract way that’s divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something “bad” is different from labeling it “not very good”, in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.)
Some people have a hard time grokking this, because intellectually, it’s easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others’ attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you.
However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you’ve got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.
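To make the 2D picture concrete, here is a minimal sketch in Python (all names are mine, purely for illustration, not any real model from the neuroscience literature) of how collapsing two semi-independent axes into one scalar loses exactly the distinction at issue: “not very good” and “actively bad” can project onto the same point.

```python
from dataclasses import dataclass

@dataclass
class Valence2D:
    """Illustrative 2D valence: separate approach and aversion axes,
    standing in for the distinct acquisition/aversion systems."""
    approach: float  # 0.0..1.0, strength of the "want it" response
    aversion: float  # 0.0..1.0, strength of the "avoid it" response

    def collapse_to_1d(self) -> float:
        """The built-in shortcut: project both axes onto a single
        good/bad scalar. Good enough for predators and prey."""
        return self.approach - self.aversion

# "Actively bad" vs. "not very good": different states of the hardware...
tempting_but_risky = Valence2D(approach=0.75, aversion=0.5)
bland_but_harmless = Valence2D(approach=0.25, aversion=0.0)

# ...that the 1D projection renders indistinguishable (both score 0.25).
assert tempting_but_risky.collapse_to_1d() == bland_but_harmless.collapse_to_1d()

# The 2D model still predicts different behavior: the first option
# triggers a genuine aversive response; the second merely fails to excite.
print(tempting_but_risky)   # Valence2D(approach=0.75, aversion=0.5)
print(bland_but_harmless)   # Valence2D(approach=0.25, aversion=0.0)
```

The point of the sketch is just that a single scalar forces “absence of good” and “presence of bad” onto the same axis; the two cases can only be responded to differently if the model keeps the axes apart.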
Thanks for sharing.
It all makes me think of the beauty queens—and their wishes for world peace.