Believing things that aren’t true can be instrumentally rational for humans: their belief systems are “leaky” (lying convincingly is difficult), so beliefs can come to do double duty by serving signalling purposes.
Yes, this is indeed the sort of argument that I’m not at all interested in, and naming this site “Less Wrong” instead of “More Wrong” reflects this. I’m going to find where the truth takes me; let me know how that lies thing works out—though I reserve the right not to believe you, of course.
Hypothetical (and I may expand on this in another post):
You’ve been shot. Fortunately, there’s a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he’s got everything he needs except any kind of pain killer. The only effect of the painkiller is going to be on your (subjective) experience of pain.
He can say:
A. Look, I don’t have any painkiller, but I’m going to have to operate anyhow.
B. Or he can take an opaque IV of saline (or something otherwise totally inert), tell you it’s morphine, and administer it to you.
Which do you prefer he does? Knowing what I know about the placebo effect, I’d have to admit I’d rather be deceived. Is this unwise? Why?
Admittedly, I wouldn’t have arrived at the false conclusion through faulty epistemology; it’s probably wise to generally trust doctors when they tell you what they’re administering. So it seems possible to want to have a false belief, even while wanting to maintain an efficient epistemology. This might not generalize to Pjeby’s various theories, but it seems that we can think of at least one case where we would desire to have a false belief. Admittedly, this might not be a decision we could make, i.e., “Lie to me about what’s in that IV!” might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)
On the other hand, I’m not sure I can think of an example where we would desire to have a belief that we know to be false, which may be the real issue.
The doctor should say “This is the best painkiller I have” and administer it. If the patient confronts the question, it’s already too late.
Are you implying that the doctor should act to trigger a placebo effect, while still making a true statement? Because in the least convenient version of the dilemma, you would have to choose one or the other.
Erased my previous comment. It missed the real point.
If you think the doctor should say, “This is the best painkiller I have,” that suggests you want to believe you are getting a potent painkiller of some kind. You want to believe that it is a potent painkiller (which is false), as opposed to believing that it is the most potent of the zero painkillers he has (which is true). The fact that the doctor is not technically lying does not change the fact that you want to believe something false.
If the IV contains a saline solution, the Way may want me to believe the IV contains a saline solution, but I sure as Hell want to think it contains a potent painkiller.
(Yes, I realize the irony in using the expression “sure as Hell.”)
“Pain will go away” is a true belief for this situation.
The doctor can do a heck of a lot better than that, even without lying. Ericksonian hypnosis, for example, involves making a lot of artfully vague statements like “you may notice some sensation happening now” and then amplifying them to lead a person to believe more specific suggestions (such as pain-relief suggestions) that follow. A lot of it can also be done covertly, such that the patient is never consciously aware that a hypnotic procedure is under way.
(Of course, statistics say that relatively few people are able to undergo major surgery with hypnoanesthesia. But if that’s the only painkiller you have, it’d be silly not to use it.)
Omega asks you to silently guess the color of a bead in a jar. Omega then inflicts some amount of pain on you. If Omega believes that you believe the bead to be red (it is in fact blue) then he will administer subjectively less pain.
The win here is for Omega to believe that you believe the bead is red. In the surgery situation, we only have to trick part of our brain.
I suspect that with practice, this would be easier if one were actually attempting it, rather than concluding mid-surgery that morphine does not work on you.
I was trying to explain why it can be instrumentally rational for humans to believe things that aren’t true.
For example, if it is the Middle Ages and you are surrounded by righteous Christian types, it is probably better (in terms of avoiding being burned at the stake) to believe in god than to be an atheist and pretend to be a believer.
Lying is often dangerous for humans—because the other humans have built-in lie detectors.
I was advocating truthfully expressing your own false beliefs under those circumstances. I was not advocating believing the truth (as an epistemic rationalist and an atheist) and then lying about it to save your skin, or indeed freely expressing your opinions and thereby getting ostracised, excommunicated, or whatever.
Believing the truth is not my main goal—nor is it a particularly biologically realistic goal. Organisms that prioritise believing the truth over survival and reproduction can be expected to do poorly. So: it is reasonable to expect that most organisms you actually observe do not value truth that highly.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives. In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
Having been in this circumstance in the past—i.e., for most of my life believing myself to be such a truth-seeker—I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes.
It’s what Robert Fritz calls in his books an “ideal-belief-reality conflict”: a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideals to others as well as striving to live up to them yourself.
Of course, you can have such a conflict about tons of things, but pretty much any time somebody has an Ideal In Capital Letters, something they defend with zeal, you know this mechanism is at work.
The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I’ve squashed dozens in myself, including several relating to truth and rightness and fairness and such. They’re also a major driving force in chronic procrastination, at least in my clients.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives.
They’re not consciously deceiving anyone; they’re sincere in their belief, despite the fact that this sincerity is a deception mechanism.
In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our “Ideal” is the excuse for our failure to actually act. For example, if your ideal is Truth, then you can always ask for more truth to justify something you don’t want to do, but insist that anything you do want to do is supporting the Truth. (Not that any of this takes place consciously, mind you.)
So someone who’s not actively investigating their own ideals for IBRCs is not really serious about being a rationalist. They’re just indulging an ideal of rationalism and signaling their superiority and in-group status, whether they realize it or not. (I sure never realized it all the years that I was doing it.)
I agree that people can take “really good ideas” too far, but I’m not satisfied by the distinction you draw.
| The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
Only if you’re speaking in an abstract way that’s divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something “bad” is different from labeling it “not very good”, in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.)
Some people have a hard time grokking this, because intellectually, it’s easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others’ attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you.
However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you’ve got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.
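A toy sketch of the 1D-vs-2D point above, in Python. The field names and numbers are invented for the example and make no claim about actual neural encoding; it only shows why collapsing two semi-independent channels into one score loses information.

```python
from dataclasses import dataclass

# Toy illustration of the 1D vs. 2D distinction described above; the field names
# and numbers are invented for the example, not a claim about real brain hardware.
@dataclass
class Valence2D:
    approach: float  # strength of the acquisition/"want" response, 0..1
    avoid: float     # strength of the aversion response, 0..1

    def collapsed_1d(self) -> float:
        # The built-in shortcut: squash both channels into one good/bad number.
        return self.approach - self.avoid

# These two states collapse to the same 1D score...
indifferent = Valence2D(approach=0.1, avoid=0.1)
conflicted = Valence2D(approach=0.9, avoid=0.9)
assert indifferent.collapsed_1d() == conflicted.collapsed_1d() == 0.0
# ...but they predict very different behaviour: mild indifference versus a strong
# approach-avoidance conflict. That difference only exists in the 2D model.
```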
Thanks for sharing.
It all makes me think of the beauty queens—and their wishes for world peace.
One of the concepts I’ve been playing with is the idea that the advantage of knowing our innate biases is not so much in overcoming them but in identifying and circumventing them.
Your common scenarios regarding risk assessment and perceptions of loss vs. gain generally assume a basis in evolutionary psychology. If these are in fact built into our brains, it strikes me that trying to overcome them directly is a skill we can never fully master, and that trying to do so tempts akrasia.
Consider a scenario where you can spend $1000 to have a 50% shot of winning $2500. It’s a definite win, but turning over the $1000 is tough because of how we weigh loss (if I recall correctly, losses are weighted about twice as heavily as gains). On the other hand, you can just tell yourself (rationalize?) that when you hand over the $1000 you’re getting back $1250 for sure. It’s an incorrect belief, but one I’d probably use, as I wouldn’t have to expend willpower overcoming my faulty loss-prevention circuits.
Which approach would you use?
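A minimal sketch of the arithmetic in the comment above. The 2:1 loss weight is the prospect-theory figure the comment half-remembers, taken here as an assumption, and the “felt value” line is just one illustrative way of applying it, not a model of anyone’s actual psychology.

```python
# Sketch of the gamble above: pay $1000 for a 50% chance at a $2500 gross prize.
p_win, gross_prize, stake = 0.5, 2500.0, 1000.0
loss_weight = 2.0  # assumed: a loss feels roughly twice as heavy as an equal gain

expected_value = p_win * gross_prize - stake
# +250.0: objectively worth taking for someone roughly risk-neutral at these stakes.

# One way the decision can feel: the $1000 handed over registers as a sure loss,
# while the prize registers only as a 50% chance of a gain.
felt_value = p_win * gross_prize - loss_weight * stake
# -750.0: which is why parting with the stake takes willpower.

# The proposed reframing ("I'm trading $1000 for a sure $1250") drops the salient
# loss term entirely, even though the "sure $1250" belief is false.
print(expected_value, felt_value)
```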
Not true; $2500 is not necessarily 2.5 times as useful as $1000.
http://en.wikipedia.org/wiki/Marginal_utility#Diminishing_marginal_utility
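One way to see the point about diminishing marginal utility: under a concave utility function (square-root utility and the wealth levels below are assumptions for the example, not anything specified in the thread), the same +$250-expected-value gamble can be worth declining at low wealth and worth taking at higher wealth.

```python
import math

# Illustrative only: sqrt utility and these wealth levels are assumptions
# for the example, not anything claimed in the thread.
def u(wealth: float) -> float:
    return math.sqrt(wealth)

def expected_utility_of_taking(wealth: float) -> float:
    # Pay the $1000 stake, then a 50% chance of receiving $2500 gross.
    return 0.5 * u(wealth - 1000 + 2500) + 0.5 * u(wealth - 1000)

for wealth in (1000.0, 10000.0):
    decline = u(wealth)
    take = expected_utility_of_taking(wealth)
    print(wealth, "take" if take > decline else "decline")
# With $1000 to your name (betting everything), concave utility says decline,
# despite the +$250 expected dollar value; with $10000 it says take.
# So whether $2500 is "2.5 times as useful" as $1000 depends on where you start.
```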
People overcome innate but undesired drives all the time, like committing violence out of anger. Your former approach actually doesn’t sound very hard to me, although it might be hard for someone unusually loss-averse. Also, the latter approach sounds like it might not be self-deception in every sense, since there’s no single thing in the mind that is a “belief” (q.v. Instrumental vs. Epistemic – A Bardic Perspective); it seems like this point is being consistently ignored throughout this discussion.
Well, what you want to do (just about by definition) is be rational in the instrumental sense.
I put significant terminal utility in believing true things, and I believe that epistemic rationality is very important for instrumental rationality. Furthermore, choosing not to self-deceive is, in general, the right decision, because you can’t even know what you’re missing, and there is reason to suspect that it is a lot.
For all real-world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right depended on you believing, for a moment, that a box that contained a blue ball contained a red one....
Maybe you just meant “I’m not interested in that kind of argument because it is so clearly wrong as not to be worth my time”, but it seems to come across as “I don’t care even if it’s true”, and that’s probably where the downvote came from.
This is a confusion based on multiple meanings of “belief”, along the lines of the “does the tree make a sound?” debate. Depending on your definition of belief, the above is either trivial or impossible.
For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it’s really blue afterward, has nothing to do with your “believing” in that moment. “Believe” is a verb—something that you DO, not something that you have.
In common parlance, we think that belief is unified and static—which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it’s still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison.
In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone “believes” they’re not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it.
Notice that these two parts are separate: there is a thought that occurs, and then it is believed, i.e., passively accepted, without dispute.
Thus, there is really no such thing as “belief”—only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it’s the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.