No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren’t “true”
Irrational?
If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.
If you decide that some false beliefs are useful, you don’t get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.
Perhaps you find that a false belief on this subject is more convenient, though...?
(I need to write up a canonical article on “No, we are not interested in convenient self-deceptions that promise short-term round-one instrumental benefits, we are interested in discovering the dividend of pushing epistemic truthseeking as far as we can take it”, since it’s a cached thought deep wisdom Dark Side Epistemology thingy that a lot of newcomers seem to regurgitate.)
For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:
“I’m terrible at math.”
“I hate math class.”
“I’m just dumb.”
That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments—very small inferential gaps, no “trick questions”.
Now, the “I’m terrible at math” attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A’s on their homework papers—when they started to believe that maybe they were good at math, after all—the difference in their confidence and effort was night and day. It was the false belief that enabled them to “take the first steps.”
I think this phenomenon illustrates a very widespread misunderstanding of what math is and how one becomes good at it. Consider the following two anecdotes:
1) Sammy walks into advanced Greek class on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at Greek” and “just dumb”.
2) Sammy walks into advanced algebra on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes that he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at math” and “just dumb”.
Anecdote 1) just seems ridiculous. Of course if you walk into a language class that’s out of your depth, you’re going to be lost, everyone knows that. Every normal person can learn every natural language; there’s no such thing as someone who’s intrinsically “terrible at Greek”. The solution is just to swallow your pride and go back to an earlier class. But it seems like anecdote 2) is not only plausible but probably happens rather often. There is some irrational belief that skill at mathematics is some kind of unrefinable Gift: some people can do it and others just can’t. This idea seems absurd to me: there is no “math gene”; there are no other examples of skills that some people can get and others not.
It’s actually anecdote 1 that seems plausible to me and anecdote 2 that does not.
I happen to have been a language teacher, teaching adult hobbyists. It seemed to me that lots of my students had very unrealistic ideas about how easy it would be for them to learn a foreign language. They really did come to class expecting one thing and 15 minutes later finding out quite another thing. They typically brushed off their past bad experience with learning, say, Spanish back in school, on the theory that they’d never really been motivated to learn Spanish but they were really truly motivated to learn language X which I was teaching. Then they realized that learning language X involved a lot of the same boring grammar talk and memorization which they’d found so hard/boring when learning Spanish. (Of course it’s also possible that my classes just sucked.)
By contrast, no-one walks into advanced algebra classes having no idea what math is about. People who think they’re terrible at math usually infer this from having spent 10 years in a school system where they consistently had trouble with math assignments, performed poorly on math tests and had trouble understanding what math teachers were talking about. Most people who think they’re bad at math probably are actually bad at math. Sure, in some other universe they might have become good at math if some early stimulus had swung another way—maybe another teaching style would have helped, or a role model or whatever. But it also seems very reasonable to think that some people are born with more aptitude for math than others. General intelligence certainly has a large heritable component and I’m sure that holds for the special case of mathematical aptitude.
I spent many years operating under the assumption that everyone was about equally smart, and constructing elaborate explanations for why the reality I was confronted with seemed to be in so much conflict with that theory. Sure, if you add enough epicycles you can do it—but there’s a far simpler theory with more explanatory power: some people are “just dumb”. I personally find that a liberating theory to operate under. A lot of my “aha moments” seem to involve either the realization that “yes, people really are that stupid” or the realization that “yes, I really am that stupid”.
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they’re taught a second technique that builds on the previous. So there are two skills required:
The discipline to study and practice a technique until you understand it and can apply it easily.
The ability to close the inferential gap between one technique and the next.
The second is the source of trouble. I can sit in on a single day’s instruction in a language class (and I have) and learn something about that language. But if a student misses just one jump in math class, the rest of the year will be incomprehensible. No wonder people become convinced they’re “terrible at math” after an experience like that!
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they’re taught a second technique that builds on the previous.
How is that unlike other subjects? Seems pretty universal.
An example of dark arts used for a good cause. The problem is that the children weren’t strong enough to understand the concept of being potentially better at math, of it being true that enthusiasm will improve their results.
They can’t feel the truth of the complicated fact of [improving in the future if they work towards it], and so you deceive them into thinking that they are [good already], a simpler alternative.
Vladimir, the problem has nothing to do with strength—some of these students did very well in other classes. Nor is it about effort—some students had already given up and weren’t bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn’t solve the problem.
The problem was simply that they believed “math” was impossible for them. The best way to get rid of that belief—maybe the only effective way—was to give them the experience of succeeding at math. A pep talk or verbal explanation of their problems wouldn’t suffice.
If your definition of “the dark arts” is so general that it includes giving an easy homework assignment, especially when it’s the best solution to a problem, I think you’ve diluted the term beyond usefulness.
If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.
Ah, and where’s your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?
If you decide that some false beliefs are useful, you don’t get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.
I’m not clear if you’re being sarcastic here, or just unable to see that this is precisely the same as my argument for trying out things you know are “wrong”.
Meanwhile, I think that you’re also still assuming that “believe” and “think true” are the same thing. I can make use of information from a book whose author believes in channeled spirits, without having to believe that channeled spirits actually exist.
In the instrumental sense, belief is merely acting as if something is true—which is not the same thing as thinking it’s actually true.
The thing that I think people here get wrong is simply that they assume that if they know something is not actually true, this means it’s permissible to discount any empirical evidence that the procedure associated with the untrue belief is actually useful. (Hypnosis is useful, for example, despite the fact that Anton Mesmer thought it had something to do with magnetism.)
Ah, and where’s your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?
Intermediate level: Rational evidence. I’ve learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain. You think you have error margin? You, a Homo sapiens? I wish I lived in the world you think you live in.
I’ve learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain.
Me too. Which is why I find it astounding that you appear to be arguing against testing things.
The difference in my “bare minimum” versus yours is that I’ve learned not to consider mental techniques as being tested unless I have personally tested them using a “shut up and do the impossible” attitude. A statistically-validated study is too LOW of a bar for me, since I have no way to find out what statistic I will represent until I try the thing for myself.
If a human should be able to reinvent all of physical science by themselves, they should be able to do the same with the mental sciences. In other words, they should be able to test themselves, in a way that allows them to detect their own biases… particularly the biases that lead them to avoid testing things in the first place.
Okay… first, “shut up and do the impossible” may sound like it has a nice ring to you, but there’s something specific I mean by it—a specific place in the hierarchy of enthusiasm, tsuyoku naritai, isshokenmei, make an extraordinary effort, and shut up and do the impossible. You’re talking enthusiasm or tsuyoku naritai. “Shut up and do the impossible” is for “reduce qualia to atoms” or “build a Friendly AI based on rigorous decision theory before anyone manages to throw the first non-rigorous one together”. It is not for testing P. J. Eby’s theories of willpower. That would come under isshokenmei at the highest and sounds more like ordinary enthusiasm to me.
Second, there are, literally, more than ten million people giving advice about akrasia on the Internet. I have no reason to pay attention to your advice in particular at its present level of rigor; if I’m interested in making another try at these things, I’ll go looking at such papers as have been written in the field. You, I’m sure, have lots of clients, and these clients are selected to be enthusiastic about you; keeping a sense of perspective in the face of that large selection effect would be an advanced rationalist sort of discipline, wherein knowledge of an abstract statistical fact overcomes large social sensory inputs; and you arrived very late in the OBLW sequence and haven’t caught up on your reading. I can understand why you don’t understand why people are paying little attention to you here, when all the feedback on your blog suggests that you are a tremendously intelligent person whose techniques work great. But to me it just sounds like standard self-help with no deeper understanding. “Just try my things!” you say, but there are a thousand others to whom I would rather allocate my effort than you. You are not the only person in the universe ever to write about productivity, and I have other people to whom I would turn for advice well before you, if I were going to make another effort.
It is your failure to understand why the achievements of others are important—why a science paper reporting the result of one experiment on willpower has higher priority for examination by me than you and all your brilliant ideas and all your enthusiasm about them and all the anecdotal evidence about how it worked for your clients—that is your failure to understand the different standards this community lives by, and your failure to understand why science works, and why it is not just pointless formality-masturbation but necessary. Yes, there’s a lot of statistical masturbation out there. But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for whom, really is necessary. This is not generally appreciated by human beings, and appreciating that fact, that it is counterintuitively necessary to do science, that it is not obvious but it really is necessary, is one of the entrance passes to the secret siblinghood of rationalists. This is perhaps something I should write about in more detail, because it’s one of those things so basic that I tend to take it for granted instead of writing about it.
As for your idea that others’ failure to pay attention to you in particular indicates a willpower failure on their part… that’s what we call “egocentric biases in availability”, namely, you think you are a much larger part of others’ mental universe than in fact you are. Enough credibility for people to try your suggestion instead of a million other suggestions is something that has to be earned. You haven’t earned it, only berated people for not listening to you. There are communities where that works, like self-help, where people are used to being berated, but in the vaster outside universe it will get you nowhere. You have to see the universe as others see it in order to get them to listen to you, and this involves understanding that they do not see you the way you see yourself. To me you are simply one voice among millions.
But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for who, really is necessary.
Necessary for determining true theories, yes. Necessary for one individual to improve their own condition, no. If a mechanic uses the controlled experiment in place of his or her own observation and testing, that is a major fail.
“Just try my things!” you say,
I’ve been saying to try something. Anything. Just test something. Yes, I’ve suggested some ways for testing things, and some things to test. But most of them are not MY things, as I’ve said over and over and over.
At this point I’ve pretty much come to the conclusion that it’s impossible for me to discuss anything related to this topic on LW without this pervasive frame that I am trying to convince people to “try my things”… when in fact I’ve bent over backwards to point as much as possible to other people’s things. Believe it or not, I didn’t come here to promote my work or business.
I don’t care if you test my things. They’re not “my” things anyway. I’m annoyed that you think I don’t understand science, because it shows you’re rounding to the nearest cliche.
I actually advocate using a much higher standard of empirical testing of change techniques than is normally used in measuring psychological processes: observation of somatic markers (see Wikipedia re: the “somatic marker hypothesis”, if you haven’t previously).
Unlike self-reporting via questionnaire, many somatic markers can be treated as objective measures of results, because they are externally visible (facial expressions, posture change, etc.) and thus can be observed and measured by third parties. We can all agree whether someone flinches or grimaces or hangs their head in response to a statement—we are not dependent on the person themselves to tell us their internal reaction, nor do we have to sort through their conscious attempts to make their initial reaction look better.
True, I do not have a quantified scale for these markers, but they are nonetheless quantifiable—and the approach is a direct outgrowth of a promising current neuroscience hypothesis. We can certainly observe whether a response is repeatable, and whether it is changed by any given intervention.
If someone wanted to turn that into controlled science, they’d have a lot of work ahead of them, but it could be done, and it would be a good idea. The catch, of course, is that you’d need to validate a somatic marker scale against some other, more subjective scale that’s already accepted, possibly in the context of some therapy that’s also relatively-validated. It seems to me that there are some chicken-and-egg problems there, but nothing that can’t be done in principle.
When I advocate that people try things, I mean that they should employ more-objective means of measurement—and on far-shorter timescales—than are traditionally used in the self-help field.
When I test some newfangled self-help modality (e.g. EFT, Sedona, etc.) it usually doesn’t take more than 30 minutes after learning the technique to know if it’s any good or not, because I have a way of measuring it that doesn’t depend on me doing any guessing. Either I still flinch or I don’t. Either I get a sinking feeling in my gut or I don’t. I know right then, in less time than it would take to list all the holes in their crazy pseudoscience theories about how the technique is supposed to work. (EFT, for example, works for certain things but its theory is on a par with Anton Mesmer’s theory of animal magnetism.)
I don’t know how you can get any more objective than that, at the level of individual testing. So if there is anything that I’ve consistently advocated here, it is that it’s possible to test self-help techniques by way of empirical observation of somatic marker responses both “before” and “after”. But even this is not “my” idea.
The somatic marker hypothesis is cutting-edge neuroscience—it still has a long way to go to reach the status of accepted theory. That makes using it as a testing method a bit more bleeding edge.
But for individual use, it has the advantage of being eminently testable.
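To make that before-and-after testing concrete, here is a minimal sketch in Python of the kind of record a tester might keep. The marker list, the trigger statement, and the function names are all hypothetical illustrations of the idea, not anything specified by the comments above or by the somatic-marker literature.

```python
# Hypothetical sketch: before/after somatic-marker check for one technique.
# A third-party observer records which externally visible markers appear when
# the subject is exposed to the same trigger statement before and after the
# intervention being tested.

MARKERS = ["flinch", "grimace", "head drop", "posture collapse", "sigh"]

def observe(markers_seen):
    """Record which of the listed markers the observer saw in response
    to the trigger statement being read aloud."""
    return {m: (m in markers_seen) for m in MARKERS}

def cleared(before, after):
    """Markers present before the technique and absent afterward."""
    return [m for m in MARKERS if before[m] and not after[m]]

# Example session with invented data:
trigger = "I have to finish the report tonight."   # hypothetical trigger statement
before = observe({"flinch", "sigh"})
# ... apply the technique being tested, then re-expose to the same trigger ...
after = observe({"sigh"})

print("Markers cleared:", cleared(before, after))              # ['flinch']
print("Markers remaining:", [m for m in MARKERS if after[m]])  # ['sigh']
```

Repeatability of the response before the intervention, and a change only after it, is what stands in for a control here; this is an individual test, not a controlled study.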
Regarding the rest of your comment, I don’t see how I can respond, since as far as I can tell, you’re attacking things I never said… and if I had said them, I would agree with your impeccable critique of them. But since I didn’t say them… I don’t see what else I can possibly say.
Maybe I should wait for the canonical article, but is your argument that false beliefs are not part of a first-best approach to rationality, even though they might be part of a second-best approach? Or is it something stronger than that?
I, for one, am interested whether there are convenient self-deceptions that promise instrumental benefits, short-term or otherwise. If nothing else, this will help me adequately assess the potential costs of rationality, rather than taking its benefits as a matter of faith.
Believing things that aren’t true can be instrumentally rational for humans—because their belief systems are “leaky”—lying convincingly is difficult—and thus beliefs can come to do double duty by serving signalling purposes.
Yes, this is indeed the sort of argument that I’m not at all interested in, and naming this site “Less Wrong” instead of “More Wrong” reflects this. I’m going to find where the truth takes me; let me know how that lies thing works out—though I reserve the right not to believe you, of course.
Hypothetical (and I may expand on this in another post):
You’ve been shot. Fortunately, there’s a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he’s got everything he needs except any kind of painkiller. The only effect of a painkiller would be on your (subjective) experience of pain.
He can either:
A. Say, “Look, I don’t have any painkiller, but I’m going to have to operate anyhow.”
B. Take some opaque, saline (or otherwise totally inert) IV, tell you it’s morphine, and administer it to you.
Which do you prefer he does? Knowing what I know about the placebo effect, I’d have to admit I’d rather be deceived. Is this unwise? Why?
Admittedly, I haven’t attained a false conclusion via my epistemology. It’s probably wise to generally trust doctors when they tell you what they’re administering. So it seems possible to want to have false belief, even while wanting to maintain efficient epistemology. This might not generalize to Pjeby’s various theories, but it seems that we can think of at least one case where we would desire having a false belief. Admittedly, this might not be a decision we could make, i.e. “Lie to me about what’s in that IV!” might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)
On the other hand, I’m not sure I can think of an example of where we desire to have a belief that we know to be false, which may be the real issue.
Are you implying that the doctor should act to trigger a placebo effect, while still making a true statement? Because in the least convenient version of the dilemma, you would have to choose one or the other.
Erased my previous comment. It missed the real point.
If you think the doctor should say, “This is the best painkiller I have,” that suggests you want to believe you are getting a potent painkiller of some kind. You want to believe that it is a potent painkiller, which is false, as opposed to it is the most potent of the zero painkillers he has, which is true. The fact that the doctor is not technically lying does not change the fact you want to believe something that is false.
If the IV contains a saline solution, the Way may want me to believe the IV contains a saline solution, but I sure as Hell want to think it contains a potent painkiller.
(Yes, I realize the irony in using the expression “sure as Hell.”)
“Pain will go away” is a true belief for this situation.
The doctor should say “This is the best painkiller I have” and administer it. If the patient confronts the question, it’s already too late.
The doctor can do a heck of a lot better than that, even without lying. Ericksonian hypnosis, for example, involves a lot of artfully-vague statements like, “you may notice some sensation happening now”, and amplifying them to lead a person to believe more specific suggestions (such as pain-relief suggestions) that follow. A lot of it can also be done covertly, such that the patient is never consciously aware that a hypnotic procedure is under way.
(Of course, statistics say that relatively few people are able to undergo major surgery with hypnoanesthesia. But if that’s the only painkiller you have, it’d be silly not to use it.)
Omega asks you to silently guess the color of a bead in a jar. Omega then inflicts some amount of pain on you. If Omega believes that you believe the bead to be red (it is in fact blue) then he will administer subjectively less pain.
The win here is for Omega to believe that you believe the bead is red. In the surgery situation, we only have to trick part of our brain.
I suspect that with practice, this would be easier if one were actually attempting it, rather than concluding mid-surgery that morphine does not work on you.
I was trying to explain why it can be instrumentally rational for humans to believe things that aren’t true.
For example, if it is the middle ages and you are surrounded by righteous Christian types, it is probably better (in terms of avoiding being burned at the stake) to believe in god than to be an atheist and pretend to be a believer.
Lying is often dangerous for humans—because the other humans have built-in lie detectors.
I was advocating truthfully expressing your own false beliefs under those circumstances. I was not advocating believing the truth (as an epistemic rationalist and an atheist) and then lying about it to save your skin—or indeed, freely expressing your opinions—thereby getting ostracised, excommunicated—or whatever.
Believing the truth is not my main goal—nor is it a particularly biologically realistic goal. Organisms that prioritise believing the truth over survival and reproduction can be expected to do poorly. So: it is reasonable to expect that most organisms you actually observe do not value truth that highly.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives. In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
Having been in this circumstance in the past—i.e., for most of my life believing myself to be such a truth-seeker—I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes.
It’s what Robert Fritz calls in his books an “ideal-belief-reality conflict” (IBRC)—a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideals to others as well as striving to live up to them yourself.
Of course, you can have such a conflict about tons of things, but pretty much, whenever someone has an Ideal In Capital Letters—something that they defend with zeal—you know this mechanism is at work.
The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I’ve squashed dozens in myself, including several relating to truth and rightness and fairness and such. They’re also a major driving force in chronic procrastination, at least in my clients.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives.
They’re not consciously deceiving anyone; they’re sincere in their belief, despite the fact that this sincerity is a deception mechanism.
In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our “Ideal” is the excuse for our failure to actually act. For example, if your ideal is Truth, then you can always ask for more truth to justify something you don’t want to do, but insist that anything you do want to do is supporting the Truth. (Not that any of this takes place consciously, mind you.)
So someone who’s not actively investigating their own ideals for IBRCs is not really serious about being a rationalist. They’re just indulging an ideal of rationalism and signaling their superiority and in-group status, whether they realize it or not. (I sure never realized it all the years that I was doing it.)
I agree that people can take “really good ideas” too far, but I’m not satisfied by the distinction you draw.
The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
Only if you’re speaking in an abstract way that’s divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something “bad” is different from labeling it “not very good”, in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.)
Some people have a hard time grokking this, because intellectually, it’s easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others’ attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you.
However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you’ve got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.
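As a toy illustration of the dimensionality point, with invented numbers: collapsing two semi-independent channels into one scalar cannot tell indifference apart from strong ambivalence, even though those two states predict very different behavior.

```python
# Hypothetical sketch: two semi-independent valence channels vs. one scalar.
from dataclasses import dataclass

@dataclass
class Valence2D:
    appetitive: float  # 0..1, strength of the acquisition / "want" response
    aversive: float    # 0..1, strength of the avoidance / "flinch" response

    def collapsed(self):
        """The 1-D summary: net good minus bad."""
        return self.appetitive - self.aversive

indifferent = Valence2D(appetitive=0.0, aversive=0.0)  # nothing at stake
ambivalent = Valence2D(appetitive=0.9, aversive=0.9)   # strongly drawn AND strongly averse

# Both collapse to the same point on a 1-D good/bad scale:
print(indifferent.collapsed(), ambivalent.collapsed())  # 0.0 0.0
```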
One of the concepts I’ve been playing with is the idea that the advantage of knowing our innate biases is not so much in overcoming them as in identifying and circumventing them.
Your common scenarios regarding risk assessment and perceptions of loss vs. gain generally assume a basis in evolutionary psychology. If these are in fact built into our brains, it strikes me that trying to overcome them directly is a skill we can never fully master, and trying to do so tempts akrasia.
Consider a scenario where you can spend $1000 for a 50% shot at winning $2500. It’s a clear win in expectation, but turning over the $1000 is tough because of how we weigh loss (if I recall correctly, losses are weighted about twice as heavily as gains). On the other hand, you can just tell yourself (rationalize?) that when you hand over the $1000 you’re getting back $1250 for sure. It’s an incorrect belief, but one I’d probably use, as I wouldn’t have to expend willpower overcoming my faulty loss-aversion circuits.
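Spelling the arithmetic out with the numbers above (the factor-of-two loss weight is the commenter’s recollection, used here as an assumption): the gamble is positive in expected value, while a loss-averse valuation of the same gamble comes out negative, which is the gap the “$1250 for sure” self-talk papers over.

```python
# Worked numbers for the "$1000 for a 50% shot at $2500" gamble.
p_win = 0.5
stake = 1000.0
prize = 2500.0

# Plain expected value of taking the bet (net of the stake):
ev = p_win * (prize - stake) + (1 - p_win) * (-stake)
print(ev)              # 250.0 -> positive, so the bet is worth taking on average

# The "$1250 for sure" reframing is just the expected gross payout:
print(p_win * prize)   # 1250.0

# A crude loss-averse valuation, weighting the loss about twice as heavily
# as the gain (the factor of 2 is an assumed round number):
loss_weight = 2.0
felt_value = p_win * (prize - stake) - (1 - p_win) * loss_weight * stake
print(felt_value)      # -250.0 -> "feels" like a bad deal despite the positive EV
```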
People overcome innate but undesired drives all the time, like committing violence out of anger. Your former approach actually doesn’t sound very hard to me, although it might be hard for someone unusually loss-averse. Also, the latter approach sounds like it might not be self-deception in every sense, since there’s no single thing in the mind that is a “belief” (q.v. Instrumental vs. Epistemic – A Bardic Perspective); it seems like this point is being consistently ignored throughout this discussion.
Well, what you want to do (just about by definition) is be rational in the instrumental sense.
I place significant terminal utility on believing true things, and believe that epistemic rationality is very important for instrumental rationality. Furthermore, it is the right decision to choose not to self-deceive in general, because you can’t even know what you’re missing, and there is reason to suspect that it is a lot.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
Maybe you just meant “I’m not interested in that kind of argument because it is so clearly wrong to not be worth my time”, but it seems to come across as “I don’t care even if it’s true”, and that’s probably where the downvote came from.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
This is a confusion based on multiple meanings of “belief”, along the lines of the “does the tree make a sound?” debate. Depending on your definition of belief, the above is either trivial or impossible.
For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it’s really blue afterward, have nothing to do with your “believing” in that moment. “Believe” is a verb—something that you DO, not something that you have.
In common parlance, we think that belief is unified and static—which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it’s still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison.
In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone “believes” they’re not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it.
Notice that these two parts are separate: there is a thought that occurs, and then it is believed… i.e., passively accepted, without dispute.
Thus, there is really no such thing as “belief”—only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it’s the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.
Irrational?
If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.
If you decide that some false beliefs are useful, you don’t get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.
Perhaps you find that a false belief on this subject is more convenient, though...?
(I need to write up a canonical article on “No, we are not interested in convenient self-deceptions that promise short-term round-one instrumental benefits, we are interested in discovering the dividend of pushing epistemic truthseeking as far as we can take it”, since it’s a cached thought deep wisdom Dark Side Epistemology thingy that a lot of newcomers seem to regurgitate.)
For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:
“I’m terrible at math.”
“I hate math class.”
“I’m just dumb.”
That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments—very small inferential gaps, no “trick questions”.
Now, the “I’m terrible at math” attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A’s on their homework papers—when they started to believe that maybe they were good at math, after all—the difference in their confidence and effort was night and day. It was the false belief that enabled them to “take the first steps.”
I think this phenomenon illustrates a very widespread misunderstanding of what math is and how ones becomes good at it. Consider the following two anecdotes:
1) Sammy walks into advanced Greek class on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at Greek” and “just dumb”.
2) Sammy walks into advanced algebra on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes that he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at math” and “just dumb”.
Anecdote 1) just seems ridiculous. Of course if you walk into a language class that’s out of your depth, you’re going to be lost, everyone knows that. Every normal person can learn every natural language; there’s no such thing as someone who’s intrinsically “terrible at Greek”. The solution is just to swallow your pride and go back to an earlier class. But it seems like anecdote 2) is not only plausible but probably happens rather often. There is some irrational belief that skill at mathematics is some kind of unrefinable Gift: some people can do it and others just can’t. This idea seems absurd to me: there is no “math gene”; there are no other examples of skills that some people can get and others not.
It’s actually anecdote 1 that seems plausible to me and anecdote 2 that does not.
I happen to have been a language teacher, teaching adult hobbyists. It seemed to me that lots of my students had very unrealistic ideas about how easy it would be for them to learn a foreign language. They really did come to class expecting one thing and 15 minutes later finding out quite another thing. They typically brushed off their past bad experience with learning, say, Spanish back in school, on the theory that they’d never really been motivated to learn Spanish but they were really truly motivated to learn language X which I was teaching. Then they realized that learning language X involved a lot of the same boring grammar talk and memorization which they’d found so hard/boring when learning Spanish. (Of course it’s also possible that my classes just sucked.)
By contrast, no-one walks into advanced algebra classes having no idea what math is about. People who think they’re terrible at math usually infer this from having spent 10 years in a school system where they consistently had trouble with math assignments, performed poorly on math tests and had trouble understanding what math teachers were talking about. Most people who think they’re bad at math probably are actually bad at math. Sure, in some other universe they might have become good at math if some early stimulus had swung another way—maybe another teaching style would have helped, or a role model or whatever. But it also seems very reasonable to think that some people are born with more aptitude for math than others. General intelligence certainly has a large heritable component and I’m sure that holds for the special case of mathematical aptitude.
I spent many years operating under the assumption that everyone was about as smart and constructing elaborate explanations for why the reality I was confronted with seemed to be in so much conflict with that theory. Sure, if you add enough epicycles you can do it—but there’s a far simpler theory which has more explanatory power: Some people are “just dumb”. I personally find that a liberating theory to operate under. A lot of my “aha moments” seem to involve either the realization that “yes, people really are that stupid” or the realization that “yes, I really am that stupid”.
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they’re taught a second technique that builds on the previous. So there are two skills required:
The discipline to study and practice a technique until you understand it and can apply it easily. The ability to close the inferential gap between one technique and the next.
The second is the source of trouble. I can (and have) sat in on a single day’s instruction of a language class and learned something about that language. But if a student misses just one jump in math class, the rest of the year will be incomprehensible. No wonder people become convinced they’re “terrible at math” after an experience like that!
How is that unlike other subjects? Seems pretty universal.
An example of dark arts used for a good cause. The problem is that the children weren’t strong enough to understand the concept of being potentially better at math, of it being true that enthusiasm will improve their results.
They can’t feel the truth of the complicated fact of [improving in the future if they work towards it], and so you deceive them into thinking that they are [good already], a simpler alternative.
Vladimir, the problem has nothing to do with strength—some of these students did very well in other classes. Nor is it about effort—some students had already given up and weren’t bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn’t solve the problem.
The problem was simply that they believed “math” was impossible for them. The best way to get rid of that belief—maybe the only effective way—was to give them the experience of succeeding at math. A pep talk or verbal explanation of their problems wouldn’t suffice.
If your definition of “the dark arts” is so general that it includes giving an easy homework assignment, especially when it’s the best solution to a problem, I think you’ve diluted the term beyond usefulness.
Ah, and where’s your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?
I’m not clear if you’re being sarcastic here, or just unable to see that this is precisely the same as my argument for trying out things you know are “wrong”.
Meanwhile, I think that you’re also still assuming that “believe” and “think true” are the same thing. I can make use of information from a book whose author believes in channeled spirits, without having to believe that channeled spirits actually exist.
In the instrumental sense, belief is merely acting as if something is true—which is not the same thing as thinking it’s actually true.
The thing that I think people here get wrong is simply that they assume that if they know something is not actually true, this means it’s permissible to discount any empirical evidence that the procedure associated with the untrue belief is actually useful. (Hypnosis is useful, for example, despite the fact that Anton Mesmer thought it had something to do with magnetism.)
Intermediate level: Rational evidence. I’ve learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain. You think you have error margin? You, a Homo sapiens? I wish I lived in the world you think you live in.
Me too. Which is why I find it astounding that you appear to be arguing against testing things.
The difference in my “bare minimum” versus yours is that I’ve learned not to consider mental techniques as being tested unless I have personally tested them using a “shut up and do the impossible” attitude. A statistically-validated study is too LOW of a bar for me, since I have no way to find out what statistic I will represent until I try the thing for myself.
If a human should be able to reinvent all of physical science by themselves, they should be able to do the same with the mental sciences. In other words, they should be able to test themselves, in a way that allows them to detect their own biases… particularly the biases that lead them to avoid testing things in the first place.
Okay… first, “shut up and do the impossible” may sound like it has a nice ring to you, but there’s something specific I mean by it—a specific place in the hierarchy of enthusiasm, tsuyoku naritai, isshokenmei, make an extraordinary effort, and shut up and do the impossible. You’re talking enthusiasm or tsuyoku naritai. “Shut up and do the impossible” is for “reduce qualia to atoms” or “build a Friendly AI based on rigorous decision theory before anyone manages to throw the first non-rigorous one together”. It is not for testing P. J. Eby’s theories of willpower. That would come under isshokenmei at the highest and sounds more like ordinary enthusiasm to me.
Second, there are, literally, more than ten million people giving advice about akrasia on the Internet. I have no reason to pay attention to your advice in particular at its present level of rigor; if I’m interested in making another try at these things, I’ll go looking at such papers as have been written in the field. You, I’m sure, have lots of clients and these clients are selected to be enthusiastic about you; keeping a sense of perspective in the face of that large selection effect would be an advanced rationalist sort of discipline wherein knowledge of an abstract statistical fact overcame large social sensory inputs, and you arrived very late in the OBLW sequence and haven’t caught up on your reading. I can understand why you don’t understand why people are paying little attention to you here, when all the feedback on your blog suggests that you are a tremendously intelligent person whose techniques work great. But to me it just sounds like standard self-help with no deeper understanding. “Just try my things!” you say, but there are a thousand others to whom I would rather allocate my effort than you. You are not the only person in the universe ever to write about productivity, and I have other people to whom I would turn for advice well before you, if I was going to make another effort.
It is your failure to understand why the achievements of others are important—why a science paper reporting the result of one experiment on willpower, has higher priority for examination by me than you and all your brilliant ideas and all your enthusiasm about them and all the anecdotal evidence about how it worked for your clients, that is your failure to understand the different standards this community lives by—and your failure to understand why science works, and why it is not just pointless formality-masturbation but necessary. Yes, there’s a lot of statistical masturbation out there. But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for who, really is necessary. This is not generally appreciated by human beings and appreciating that fact, that it is counterintuitively necessary to do science, that it is not obvious but it really is necessary, is one of the entrance passes to the secret siblinghood of rationalists. This is perhaps something I should write about in more detail, because it’s one of those things so basic that I tend to take it for granted instead of writing about it.
As for your idea that others’ attention to pay attention to you in particular indicates a willpower failure on their part… that’s what we call “egocentric biases in availability”, namely, you think you are a much larger part of others’ mental universe than in fact you are. So much credibility as to try your suggestion instead of a million other suggestions is something that has to be earned. You haven’t earned it, only berated people for not listening to you. There are communities where that works, like self-help, where people are used to being berated, but in the vaster outside universe it will get you nowhere. You have to see the universe as others see it in order to get them to listen to you, and this involves understanding that they do not see you the way you see yourself.. To me you are simply one voice among millions.
Necessary for determining true theories, yes. Necessary for one individual to improve their own condition, no. If a mechanic uses the controlled experiment in place of his or her own observation and testing, that is a major fail.
I’ve been saying to try something. Anything. Just test something. Yes, I’ve suggested some ways for testing things, and some things to test. But most of them are not MY things, as I’ve said over and over and over.
At this point I’ve pretty much come to the conclusion that it’s impossible for me to discuss anything related to this topic on LW without this pervasive frame that I am trying to convince people to “try my things”… when in fact I’ve bent over backwards to point as much as possible to other people’s things. Believe it or not, I didn’t come here to promote my work or business.
I don’t care if you test my things. They’re not “my” things anyway. I’m annoyed that you think I don’t understand science, because it shows you’re rounding to the nearest cliche.
I actually advocate using a much higher standard of empirical testing of change techniques than is normally used in measuring psychological processes: observation of somatic markers (see Wikipedia re: the “somatic marker hypothesis”, if you haven’t previously).
Unlike self-reporting via questionnaire, many somatic markers can be treated as objective measures of results, because they are externally visible (facial expressions, posture change, etc.) and thus can be observed and measured by third parties. We can all agree whether someone flinches or grimaces or hangs their head in response to a statement—we are not dependent on the person themselves to tell us their internal reaction, nor do we have to sort through their conscious attempts to make their initial reaction look better.
True, I do not have a quantified scale for these markers, but it is nonetheless quantifiable—and it’s a direct outgrowth of a promising current neuroscience hypothesis. We can certainly observe whether a response is repeatable, and whether it is changed by any given intervention.
If someone wanted to turn that into controlled science, they’d have a lot of work ahead of them, but it could be done, and it would be a good idea. The catch, of course, is that you’d need to validate a somatic marker scale against some other, more subjective scale that’s already accepted, possibly in the context of some therapy that’s also relatively-validated. It seems to me that there are some chicken-and-egg problems there, but nothing that can’t be done in principle.
When I advocate that people try things, I mean that they should employ more-objective means of measurement—and on far-shorter timescales—than are traditionally used in the self-help field.
When I test some newfangled self-help modality (e.g. EFT, Sedona, etc.) it usually doesn’t take more than 30 minutes after learning the technique to know if it’s any good or not, because I have a way of measuring it that doesn’t depend on me doing any guessing. Either I still flinch or I don’t. Either I get a sinking feeling in my gut or I don’t. I know right then, in less time than it would take to list all the holes in their crazy pseudoscience theories about how the technique is supposed to work. (EFT, for example, works for certain things but its theory is on a par with Anton Mesmer’s theory of animal magnetism.)
I don’t know how you can get any more objective than that, at the level of individual testing. So, if there is anything that I’ve consistently advocated here, is that it’s possible to test self-help techniques by way of empirical observation of somatic marker responses both “before” and “after”. But even this is not “my” idea.
The somatic marker hypothesis is cutting-edge neuroscience—it still has a long way to go to reach the status of accepted theory. That makes using it as a testing method a bit more bleeding edge.
But for individual use, it has the advantage of being eminently testable.
Regarding the rest of your comment, I don’t see how I can respond, since as far as I can tell, you’re attacking things I never said… and if I had said them, I would agree with your impeccable critique of them. But since I didn’t say them… I don’t see what else I can possibly say.
Maybe I should wait for the canonical article, but is your argument that false beliefs are not part of a first-best approach to rationality, even though they might be part of a second-best approach? Or is it something stronger than that?
I, for one, am interested whether there are convenient self-deceptions that promise instrumental benefits, short-term or otherwise. If nothing else, this will help me adequately assess the potential costs of rationality, rather than taking its benefits as a matter of faith.
Believing things that aren’t true can be instrumentally rational for humans—because their belief systems are “leaky”—lying convincingly is difficult—and thus beliefs can come to do double duty by serving signalling purposes.
Yes, this is indeed the sort of argument that I’m not at all interested in, and naming this site “Less Wrong” instead of “More Wrong” reflects this. I’m going to find where the truth takes me; let me know how that lies thing works out—though I reserve the right not to believe you, of course.
Hypothetical (and I may expand on this in another post):
You’ve been shot. Fortunately, there’s a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he’s got everything he needs except any kind of pain killer. The only effect of the painkiller is going to be on your (subjective) experience of pain.
He can say: A. Look, I don’t have painkiller, but I’m going to have to operate anyhow.
B. He can take some opaque, saline (or otherwise totally inert) IV, tell you it’s morphine, and administer it to you.
Which do you prefer he does? Knowing what I know about the placebo effect, I’d have to admit I’d rather be deceived. Is this unwise? Why?
Admittedly, I haven’t attained a false conclusion via my epistemology. It’s probably wise to generally trust doctors when they tell you what they’re administering. So it seems possible to want to have false belief, even while wanting to maintain efficient epistemology. This might not generalize to Pjeby’s various theories, but it seems that we can think of at least one case where we would desire having a false belief. Admittedly, this might not be a decision we could make, i.e. “Lie to me about what’s in that IV!” might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)
On the other hand, I’m not sure I can think of an example of where we desire to have a belief that we know to be false, which may be the real issue.
The doctor should say “This is the best painkiller I have” and administer it. If the patient confronts the question, it’s already too late.
Are you implying that the doctor should act to trigger a placebo effect, while still making a true statement? Because in the least convenient version of the dilemma, you would have to choose one or the other.
Erased my previous comment. It missed the real point.
If you think the doctor should say, “This is the best painkiller I have,” that suggests you want to believe you are getting a potent painkiller of some kind. You want to believe that it is a potent painkiller, which is false, as opposed to it is the most potent of the zero painkillers he has, which is true. The fact that the doctor is not technically lying does not change the fact you want to believe something that is false.
If the IV contains a saline solution, the Way may want me to believe the IV contains a saline solution, but I sure as Hell want to think it contains a potent painkiller.
(Yes, I realize the irony in using the expression “sure as Hell.”)
“Pain will go away” is a true belief for this situation.
The doctor can do a heck of a lot better than that, even without lying. Ericksonian hypnosis, for example, involves a lot of artfully-vague statements like, “you may notice some sensation happening now”, and amplifying them to lead a person to believe more specific suggestions (such as pain-relief suggestions) that follow. A lot of it can also be done covertly, such that the patient is never consciously aware that a hypnotic procedure is under way.
(Of course, statistics say that relatively few people are able to undergo major surgery with hypnoanesthesia. But if that’s the only painkiller you have, it’d be silly not to use it.)
Omega asks you to silently guess the color of a bead in a jar. Omega then inflicts some amount of pain on you. If Omega believes that you believe the bead to be red (it is in fact blue) then he will administer subjectively less pain.
The win here is for Omega to believe that you believe the bead is red. In the surgery situation, we only have to trick part of our brain.
I suspect that with practice, this would be easier if one were actually attempting it, rather than concluding mid-surgery that morphine does not work on you.
I was trying to explain why it can be instrumentally rational for humans to believe things that aren’t true.
For example, if it is the middle ages and you are surrounded by righteous Christian types, it is probably better (in terms of avoiding being burned at the stake) to believe in god than to be an atheist and pretend to be a believer.
Lying is often dangerous for humans—because the other humans have built-in lie detectors.
I was advocating truthfully expressing your own false beliefs under those circumstances. I was not advocating believing the truth (as an epistemic rationalist and an atheist) and then lying about it to save your skin—or indeed, freely expressing your opinions—thereby getting ostracised, excommunicated, or whatever.
Believing the truth is not my main goal—nor is it a particularly biologically realistic goal. Organisms that prioritise believing the truth over survival and reproduction can be expected to do poorly. So: it is reasonable to expect that most organisms you actually observe do not value truth that highly.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives. In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
Having been in this circumstance in the past—i.e., for most of my life believing myself to be such a truth-seeker—I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes.
It’s what Robert Fritz, in his books, calls an “ideal-belief-reality conflict” (IBRC): a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideals to others as well as striving to live up to them yourself.
Of course, you can have such a conflict about tons of things, but with pretty much anybody who has an Ideal In Capital Letters (something they defend with zeal), you know this mechanism is at work.
The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I’ve squashed dozens in myself, including several relating to truth and rightness and fairness and such. They’re also a major driving force in chronic procrastination, at least in my clients.
They’re not consciously deceiving anyone; they’re sincere in their belief, despite the fact that this sincerity is a deception mechanism.
Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our “Ideal” is the excuse for our failure to actually act. For example, if your ideal is Truth, then you can always ask for more truth to justify something you don’t want to do, but insist that anything you do want to do is supporting the Truth. (Not that any of this takes place consciously, mind you.)
So someone who’s not actively investigating their own ideals for IBRCs is not really serious about being a rationalist. They’re just indulging an ideal of rationalism and signaling their superiority and in-group status, whether they realize it or not. (I sure never realized it all the years that I was doing it.)
I agree that people can take “really good ideas” too far, but I’m not satisfied by the distinction you draw.
“The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.”
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
Only if you’re speaking in an abstract way that’s divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something “bad” is different from labeling it “not very good”, in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.)
Some people have a hard time grokking this, because intellectually, it’s easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others’ attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you.
However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you’ve got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.
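To make the distinction concrete, here is a toy sketch (the field names and numbers are my own invention, not anything taken from the neuroscience): a single collapsed score can’t tell “not very good” apart from “actively bad,” while two semi-independent dimensions can.

```python
from dataclasses import dataclass

# Toy model only: made-up names and numbers to illustrate 1D vs. 2D valuation.
@dataclass
class Valuation:
    approach: float  # strength of the acquisition ("want it") response
    avoid: float     # strength of the aversion ("get away from it") response

    def collapsed_1d(self) -> float:
        # The lossy one-dimensional summary: net "goodness" on a single axis.
        return self.approach - self.avoid

not_very_good = Valuation(approach=0.1, avoid=0.0)  # weak pull, no push
actively_bad = Valuation(approach=0.1, avoid=0.8)   # weak pull, strong push

# Collapsed to one axis these just read as "slightly good" vs. "fairly bad"
# (roughly 0.1 and -0.7), but the two-dimensional view shows the second one
# engages a different (aversive) system.
print(not_very_good.collapsed_1d(), actively_bad.collapsed_1d())
```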
Thanks for sharing.
It all makes me think of the beauty queens—and their wishes for world peace.
One of the concepts I’ve been playing with is the idea that the advantage of knowing our innate biases is not so much in overcoming them but in identifying and circumventing them.
Your common scenarios regarding risk assessment and perceptions of loss vs. gain generally assume a basis in evolutionary psychology. If these biases are in fact built into our brains, it strikes me that overcoming them directly is a skill we can never fully master, and that attempting it invites akrasia.
Consider a scenario where you can spend $1000 for a 50% shot at winning $2500. It’s a definite win, but turning over the $1000 is tough because of how we weigh losses (if I recall correctly, a loss is weighted about twice as heavily as an equivalent gain). On the other hand, you can just tell yourself (rationalize?) that when you hand over the $1000 you’re getting back $1250 for sure. It’s an incorrect belief, but one I’d probably use, as I wouldn’t have to expend willpower overcoming my faulty loss-aversion circuitry.
Which approach would you use?
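For concreteness, here is a minimal sketch of the arithmetic behind both framings, assuming the roughly 2x loss-aversion weight mentioned above (the exact coefficient is an assumption, not a measured figure):

```python
# Rough sketch: why a positive-expected-value bet can still feel like a loss.
p_win = 0.5
win_outcome = 2500 - 1000    # net result if the bet pays off: +1500
lose_outcome = -1000         # net result if it doesn't: -1000
loss_weight = 2.0            # losses assumed to weigh about twice as much as equal gains

expected_value = p_win * win_outcome + (1 - p_win) * lose_outcome
# 0.5 * 1500 + 0.5 * (-1000) = +250  -> worth taking on paper

felt_value = p_win * win_outcome + (1 - p_win) * loss_weight * lose_outcome
# 0.5 * 1500 + 0.5 * (-2000) = -250  -> feels like a bad deal

print(expected_value, felt_value)
```

The “getting back $1250 for sure” reframe works by dropping the losing branch from the felt calculation entirely, which is exactly what makes it a false belief.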
Not true; $2500 is not necessarily 2.5 times as useful as $1000.
http://en.wikipedia.org/wiki/Marginal_utility#Diminishing_marginal_utility
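To put a number on that, here is a toy sketch with an arbitrary concave utility function (square root, chosen purely for illustration) showing that the same positive-expected-value bet can be worth refusing or taking depending on how much wealth sits behind it:

```python
import math

# Illustrative only: sqrt stands in for any diminishing-marginal-utility function.
def utility(wealth):
    return math.sqrt(wealth)

def gain_from_bet(wealth, stake=1000, prize=2500, p_win=0.5):
    keep = utility(wealth)
    gamble = p_win * utility(wealth - stake + prize) + (1 - p_win) * utility(wealth - stake)
    return gamble - keep  # positive means the bet is worth taking in utility terms

print(gain_from_bet(1000))   # about -6.6: if $1000 is all you have, decline despite the positive EV
print(gain_from_bet(10000))  # about +1: with a larger cushion, the same bet is worth taking
```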
People overcome innate but undesired drives all the time, like the urge to commit violence out of anger. Your former approach actually doesn’t sound very hard to me, although it might be hard for someone unusually loss-averse. Also, the latter approach sounds like it might not be self-deception in every sense, since there’s no single thing in the mind that is a “belief” (q.v. Instrumental vs. Epistemic – A Bardic Perspective); it seems like this point is being consistently ignored throughout this discussion.
Well, what you want to do (just about by definition) is be rational in the instrumental sense.
I put significant terminal utility in believing true things, and believe that epistemic rationality is very important for instrumental rationality. Furthermore, it is the right decision to choose not to self-deceive in general, because you can’t even know what you’re missing, and there is reason to suspect that it is a lot.
For all real-world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right depended on you believing for a moment that a box that contained a blue ball contained a red one…
Maybe you just meant “I’m not interested in that kind of argument because it is so clearly wrong to not be worth my time”, but it seems to come across as “I don’t care even if it’s true”, and that’s probably where the downvote came from.
This is a confusion based on multiple meanings of “belief”, along the lines of the “does the tree make a sound?” debate. Depending on your definition of belief, the above is either trivial or impossible.
For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it’s really blue afterward, has nothing to do with your “believing” in that moment. “Believe” is a verb—something that you DO, not something that you have.
In common parlance, we think that belief is unified and static—which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it’s still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison.
In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone “believes” they’re not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it.
Notice that these two parts are separate: there is a thought that occurs, and then it is believed… i.e., passively accepted, without dispute.
Thus, there is really no such thing as “belief”—only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it’s the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.