Indeed. It seems that PJEby is using a definition of “wrong” according to which I am wrong if I act in a way that is implied by certain belief in some false proposition, and that is not implied by certain disbelief in that proposition. He’s right that we should be prepared to sometimes be wrong in that sense. But I’m not convinced anyone else is interpreting “less wrong” in that way.
It seems that PJEby is using a definition of “wrong” according to which I am wrong if I act in a way that is implied by certain belief in some false proposition, and that is not implied by certain disbelief in that proposition.
No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren’t “true”, no matter how useful it may be to believe them, either for practical purposes or as a steppingstone to finding something better. (À la de Bono’s notion of “proto-truth”—i.e., a truth you accept as provisional, rather than absolute.)
(De Bono’s notion of lateral thinking, by the way, is another great example of how, to find something more right, you may start by doing something that’s knowingly more wrong. His “provocative operator” (later renamed “green hat thinking”) is deliberately stating an idea that may be quite thoroughly insane, as a way of backing up and coming at it from a different angle.)
No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren’t “true”
Irrational?
If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.
If you decide that some false beliefs are useful, you don’t get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.
Perhaps you find that a false belief on this subject is more convenient, though...?
(I need to write up a canonical article on “No, we are not interested in convenient self-deceptions that promise short-term round-one instrumental benefits, we are interested in discovering the dividend of pushing epistemic truthseeking as far as we can take it”, since it’s a cached thought deep wisdom Dark Side Epistemology thingy that a lot of newcomers seem to regurgitate.)
For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:
“I’m terrible at math.”
“I hate math class.”
“I’m just dumb.”
That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments—very small inferential gaps, no “trick questions”.
Now, the “I’m terrible at math” attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A’s on their homework papers—when they started to believe that maybe they were good at math, after all—the difference in their confidence and effort was night and day. It was the false belief that enabled them to “take the first steps.”
I think this phenomenon illustrates a very widespread misunderstanding of what math is and how one becomes good at it. Consider the following two anecdotes:
1) Sammy walks into advanced Greek class on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at Greek” and “just dumb”.
2) Sammy walks into advanced algebra on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes that he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at math” and “just dumb”.
Anecdote 1) just seems ridiculous. Of course if you walk into a language class that’s out of your depth, you’re going to be lost, everyone knows that. Every normal person can learn every natural language; there’s no such thing as someone who’s intrinsically “terrible at Greek”. The solution is just to swallow your pride and go back to an earlier class. But it seems like anecdote 2) is not only plausible but probably happens rather often. There is some irrational belief that skill at mathematics is some kind of unrefinable Gift: some people can do it and others just can’t. This idea seems absurd to me: there is no “math gene”, and there are no other skills that some people can acquire and others simply cannot.
It’s actually anecdote 1 that seems plausible to me and anecdote 2 that does not.
I happen to have been a language teacher, teaching adult hobbyists. It seemed to me that lots of my students had very unrealistic ideas about how easy it would be for them to learn a foreign language. They really did come to class expecting one thing and 15 minutes later finding out quite another thing. They typically brushed off their past bad experience with learning, say, Spanish back in school, on the theory that they’d never really been motivated to learn Spanish but they were really truly motivated to learn language X which I was teaching. Then they realized that learning language X involved a lot of the same boring grammar talk and memorization which they’d found so hard/boring when learning Spanish. (Of course it’s also possible that my classes just sucked.)
By contrast, no-one walks into advanced algebra classes having no idea what math is about. People who think they’re terrible at math usually infer this from having spent 10 years in a school system where they consistently had trouble with math assignments, performed poorly on math tests and had trouble understanding what math teachers were talking about. Most people who think they’re bad at math probably are actually bad at math. Sure, in some other universe they might have become good at math if some early stimulus had swung another way—maybe another teaching style would have helped, or a role model or whatever. But it also seems very reasonable to think that some people are born with more aptitude for math than others. General intelligence certainly has a large heritable component and I’m sure that holds for the special case of mathematical aptitude.
I spent many years operating under the assumption that everyone was about equally smart, and constructing elaborate explanations for why the reality I was confronted with seemed to be in so much conflict with that theory. Sure, if you add enough epicycles you can do it—but there’s a far simpler theory which has more explanatory power: Some people are “just dumb”. I personally find that a liberating theory to operate under. A lot of my “aha moments” seem to involve either the realization that “yes, people really are that stupid” or the realization that “yes, I really am that stupid”.
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they’re taught a second technique that builds on the previous. So there are two skills required:
The discipline to study and practice a technique until you understand it and can apply it easily.
The ability to close the inferential gap between one technique and the next.
The second is the source of trouble. I can sit in (and have sat in) on a single day’s instruction of a language class and learn something about that language. But if a student misses just one jump in math class, the rest of the year will be incomprehensible. No wonder people become convinced they’re “terrible at math” after an experience like that!
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they’re taught a second technique that builds on the previous.
How is that unlike other subjects? Seems pretty universal.
An example of dark arts used for a good cause. The problem is that the children weren’t strong enough to understand the concept of being potentially better at math, of it being true that enthusiasm will improve their results.
They can’t feel the truth of the complicated fact of [improving in the future if they work towards it], and so you deceive them into thinking that they are [good already], a simpler alternative.
Vladimir, the problem has nothing to do with strength—some of these students did very well in other classes. Nor is it about effort—some students had already given up and weren’t bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn’t solve the problem.
The problem was simply that they believed “math” was impossible for them. The best way to get rid of that belief—maybe the only effective way—was to give them the experience of succeeding at math. A pep talk or verbal explanation of their problems wouldn’t suffice.
If your definition of “the dark arts” is so general that it includes giving an easy homework assignment, especially when it’s the best solution to a problem, I think you’ve diluted the term beyond usefulness.
If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.
Ah, and where’s your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?
If you decide that some false beliefs are useful, you don’t get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.
I’m not clear if you’re being sarcastic here, or just unable to see that this is precisely the same as my argument for trying out things you know are “wrong”.
Meanwhile, I think that you’re also still assuming that “believe” and “think true” are the same thing. I can make use of information from a book whose author believes in channeled spirits, without having to believe that channeled spirits actually exist.
In the instrumental sense, belief is merely acting as if something is true—which is not the same thing as thinking it’s actually true.
The thing that I think people here get wrong is simply that they assume that if they know something is not actually true, this means it’s permissible to discount any empirical evidence that the procedure associated with the untrue belief is actually useful. (Hypnosis is useful, for example, despite the fact that Anton Mesmer thought it had something to do with magnetism.)
Ah, and where’s your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?
Intermediate level: Rational evidence. I’ve learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain. You think you have error margin? You, a Homo sapiens? I wish I lived in the world you think you live in.
I’ve learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain.
Me too. Which is why I find it astounding that you appear to be arguing against testing things.
The difference in my “bare minimum” versus yours is that I’ve learned not to consider mental techniques as being tested unless I have personally tested them using a “shut up and do the impossible” attitude. A statistically-validated study is too LOW of a bar for me, since I have no way to find out what statistic I will represent until I try the thing for myself.
If a human should be able to reinvent all of physical science by themselves, they should be able to do the same with the mental sciences. In other words, they should be able to test themselves, in a way that allows them to detect their own biases… particularly the biases that lead them to avoid testing things in the first place.
Okay… first, “shut up and do the impossible” may sound like it has a nice ring to you, but there’s something specific I mean by it—a specific place in the hierarchy of enthusiasm, tsuyoku naritai, isshokenmei, make an extraordinary effort, and shut up and do the impossible. You’re talking enthusiasm or tsuyoku naritai. “Shut up and do the impossible” is for “reduce qualia to atoms” or “build a Friendly AI based on rigorous decision theory before anyone manages to throw the first non-rigorous one together”. It is not for testing P. J. Eby’s theories of willpower. That would come under isshokenmei at the highest and sounds more like ordinary enthusiasm to me.
Second, there are, literally, more than ten million people giving advice about akrasia on the Internet. I have no reason to pay attention to your advice in particular at its present level of rigor; if I’m interested in making another try at these things, I’ll go looking at such papers as have been written in the field. You, I’m sure, have lots of clients and these clients are selected to be enthusiastic about you; keeping a sense of perspective in the face of that large selection effect would be an advanced rationalist sort of discipline wherein knowledge of an abstract statistical fact overcame large social sensory inputs, and you arrived very late in the OBLW sequence and haven’t caught up on your reading. I can understand why you don’t understand why people are paying little attention to you here, when all the feedback on your blog suggests that you are a tremendously intelligent person whose techniques work great. But to me it just sounds like standard self-help with no deeper understanding. “Just try my things!” you say, but there are a thousand others to whom I would rather allocate my effort than you. You are not the only person in the universe ever to write about productivity, and I have other people to whom I would turn for advice well before you, if I was going to make another effort.
It is your failure to understand why the achievements of others are important—why a science paper reporting the result of one experiment on willpower has higher priority for examination by me than you and all your brilliant ideas and all your enthusiasm about them and all the anecdotal evidence about how it worked for your clients. That is your failure to understand the different standards this community lives by, and your failure to understand why science works, and why it is not just pointless formality-masturbation but necessary. Yes, there’s a lot of statistical masturbation out there. But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for who, really is necessary. This is not generally appreciated by human beings, and appreciating that fact, that it is counterintuitively necessary to do science, that it is not obvious but it really is necessary, is one of the entrance passes to the secret siblinghood of rationalists. This is perhaps something I should write about in more detail, because it’s one of those things so basic that I tend to take it for granted instead of writing about it.
As for your idea that others’ failure to pay attention to you in particular indicates a willpower failure on their part… that’s what we call “egocentric biases in availability”, namely, you think you are a much larger part of others’ mental universe than in fact you are. So much credibility as to try your suggestion instead of a million other suggestions is something that has to be earned. You haven’t earned it, only berated people for not listening to you. There are communities where that works, like self-help, where people are used to being berated, but in the vaster outside universe it will get you nowhere. You have to see the universe as others see it in order to get them to listen to you, and this involves understanding that they do not see you the way you see yourself. To me you are simply one voice among millions.
But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for who, really is necessary.
Necessary for determining true theories, yes. Necessary for one individual to improve their own condition, no. If a mechanic uses the controlled experiment in place of his or her own observation and testing, that is a major fail.
“Just try my things!” you say,
I’ve been saying to try something. Anything. Just test something. Yes, I’ve suggested some ways for testing things, and some things to test. But most of them are not MY things, as I’ve said over and over and over.
At this point I’ve pretty much come to the conclusion that it’s impossible for me to discuss anything related to this topic on LW without this pervasive frame that I am trying to convince people to “try my things”… when in fact I’ve bent over backwards to point as much as possible to other people’s things. Believe it or not, I didn’t come here to promote my work or business.
I don’t care if you test my things. They’re not “my” things anyway. I’m annoyed that you think I don’t understand science, because it shows you’re rounding to the nearest cliche.
I actually advocate using a much higher standard of empirical testing of change techniques than is normally used in measuring psychological processes: observation of somatic markers (see Wikipedia re: the “somatic marker hypothesis”, if you haven’t previously).
Unlike self-reporting via questionnaire, many somatic markers can be treated as objective measures of results, because they are externally visible (facial expressions, posture change, etc.) and thus can be observed and measured by third parties. We can all agree whether someone flinches or grimaces or hangs their head in response to a statement—we are not dependent on the person themselves to tell us their internal reaction, nor do we have to sort through their conscious attempts to make their initial reaction look better.
True, I do not have a quantified scale for these markers, but the approach is nonetheless quantifiable—and it’s a direct outgrowth of a promising current neuroscience hypothesis. We can certainly observe whether a response is repeatable, and whether it is changed by any given intervention.
If someone wanted to turn that into controlled science, they’d have a lot of work ahead of them, but it could be done, and it would be a good idea. The catch, of course, is that you’d need to validate a somatic marker scale against some other, more subjective scale that’s already accepted, possibly in the context of some therapy that’s also relatively-validated. It seems to me that there are some chicken-and-egg problems there, but nothing that can’t be done in principle.
When I advocate that people try things, I mean that they should employ more-objective means of measurement—and on far-shorter timescales—than are traditionally used in the self-help field.
When I test some newfangled self-help modality (e.g. EFT, Sedona, etc.) it usually doesn’t take more than 30 minutes after learning the technique to know if it’s any good or not, because I have a way of measuring it that doesn’t depend on me doing any guessing. Either I still flinch or I don’t. Either I get a sinking feeling in my gut or I don’t. I know right then, in less time than it would take to list all the holes in their crazy pseudoscience theories about how the technique is supposed to work. (EFT, for example, works for certain things but its theory is on a par with Anton Mesmer’s theory of animal magnetism.)
I don’t know how you can get any more objective than that, at the level of individual testing. So, if there is anything that I’ve consistently advocated here, it is that it’s possible to test self-help techniques by way of empirical observation of somatic marker responses both “before” and “after”. But even this is not “my” idea.
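For concreteness, here is a minimal sketch of what such a before/after test might look like as a log. The marker description and the technique name are hypothetical placeholders, not a prescribed protocol:

```python
# Minimal before/after log for testing a self-help technique against an
# externally observable somatic marker. All names here are illustrative.
trials = []

def record(phase, technique, marker, fired):
    """One observation: did the marker (flinch, grimace, sinking feeling...)
    appear when the trigger thought or statement was presented?"""
    trials.append({"phase": phase, "technique": technique,
                   "marker": marker, "fired": fired})

marker = "flinch when saying 'I'll finish the report today'"
record("before", None, marker, fired=True)          # baseline: marker is present
# ... spend ~30 minutes learning and applying the technique under test ...
record("after", "hypothetical technique X", marker, fired=False)

# The "result" is simply whether a previously repeatable marker stopped firing.
print(trials[0]["fired"], "->", trials[-1]["fired"])
```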
The somatic marker hypothesis is cutting-edge neuroscience—it still has a long way to go to reach the status of accepted theory. That makes using it as a testing method a bit more bleeding edge.
But for individual use, it has the advantage of being eminently testable.
Regarding the rest of your comment, I don’t see how I can respond, since as far as I can tell, you’re attacking things I never said… and if I had said them, I would agree with your impeccable critique of them. But since I didn’t say them… I don’t see what else I can possibly say.
Maybe I should wait for the canonical article, but is your argument that false beliefs are not part of a first-best approach to rationality, even though they might be part of a second-best approach? Or is it something stronger than that?
I, for one, am interested whether there are convenient self-deceptions that promise instrumental benefits, short-term or otherwise. If nothing else, this will help me adequately assess the potential costs of rationality, rather than taking its benefits as a matter of faith.
Believing things that aren’t true can be instrumentally rational for humans—because their belief systems are “leaky”—lying convincingly is difficult—and thus beliefs can come to do double duty by serving signalling purposes.
Yes, this is indeed the sort of argument that I’m not at all interested in, and naming this site “Less Wrong” instead of “More Wrong” reflects this. I’m going to find where the truth takes me; let me know how that lies thing works out—though I reserve the right not to believe you, of course.
Hypothetical (and I may expand on this in another post):
You’ve been shot. Fortunately, there’s a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he’s got everything he needs except any kind of pain killer. The only effect of the painkiller is going to be on your (subjective) experience of pain.
He can do one of two things:
A. Say, “Look, I don’t have any painkiller, but I’m going to have to operate anyhow.”
B. Take some opaque, saline (or otherwise totally inert) IV, tell you it’s morphine, and administer it to you.
Which do you prefer he does? Knowing what I know about the placebo effect, I’d have to admit I’d rather be deceived. Is this unwise? Why?
Admittedly, I haven’t attained a false conclusion via my epistemology. It’s probably wise to generally trust doctors when they tell you what they’re administering. So it seems possible to want to have a false belief, even while wanting to maintain efficient epistemology. This might not generalize to PJEby’s various theories, but it seems that we can think of at least one case where we would desire having a false belief. Admittedly, this might not be a decision we could make, i.e. “Lie to me about what’s in that IV!” might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)
On the other hand, I’m not sure I can think of an example of where we desire to have a belief that we know to be false, which may be the real issue.
Are you implying that the doctor should act to trigger a placebo effect, while still making a true statement? Because in the least convenient version of the dilemma, you would have to choose one or the other.
Erased my previous comment. It missed the real point.
If you think the doctor should say, “This is the best painkiller I have,” that suggests you want to believe you are getting a potent painkiller of some kind. You want to believe that it is a potent painkiller, which is false, as opposed to believing that it is the most potent of the zero painkillers he has, which is true. The fact that the doctor is not technically lying does not change the fact that you want to believe something that is false.
If the IV contains a saline solution, the Way may want me to believe the IV contains a saline solution, but I sure as Hell want to think it contains a potent painkiller.
(Yes, I realize the irony in using the expression “sure as Hell.”)
The doctor should say “This is the best painkiller I have” and administer it.
The doctor can do a heck of a lot better than that, even without lying. Ericksonian hypnosis, for example, involves a lot of artfully-vague statements like, “you may notice some sensation happening now”, and amplifying them to lead a person to believe more specific suggestions (such as pain-relief suggestions) that follow. A lot of it can also be done covertly, such that the patient is never consciously aware that a hypnotic procedure is under way.
(Of course, statistics say that relatively few people are able to undergo major surgery with hypnoanesthesia. But if that’s the only painkiller you have, it’d be silly not to use it.)
Omega asks you to silently guess the color of a bead in a jar. Omega then inflicts some amount of pain on you. If Omega believes that you believe the bead to be red (it is in fact blue) then he will administer subjectively less pain.
The win here is for Omega to believe that you believe the bead is red, even while you know it is blue. In the surgery situation, we only have to trick part of our brain.
I suspect that with practice, this would be easier if one were actually attempting it, rather than concluding mid-surgery that morphine does not work on you.
I was trying to explain why it can be instrumentally rational for humans to believe things that aren’t true.
For example, if it is the middle ages and you are surrounded by righteous Christian types, it is probably better (in terms of avoiding being burned at the stake) to believe in god than to be an atheist and pretend to be a believer.
Lying is often dangerous for humans—because the other humans have built-in lie detectors.
I was advocating truthfully expressing your own false beliefs under those circumstances. I was not advocating believing the truth (as an epistemic rationalist and an atheist) and then lying about it to save your skin—or indeed, freely expressing your opinions and thereby getting ostracised, excommunicated, or whatever.
Believing the truth is not my main goal—nor is it a particularly biologically realistic goal. Organisms that prioritise believing the truth over survival and reproduction can be expected to do poorly. So: it is reasonable to expect that most organisms you actually observe do not value truth that highly.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives. In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
Having been in this circumstance in the past—i.e., for most of my life believing myself to be such a truth-seeker—I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes.
It’s what Robert Fritz calls in his books an “ideal-belief-reality conflict” (IBRC)—a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideals to others as well as striving to live up to them yourself.
Of course, you can have such a conflict about tons of things, but pretty much, whenever somebody has an Ideal In Capital Letters—something that they defend with zeal—you know this mechanism is at work.
The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I’ve squashed dozens in myself, including several relating to truth and rightness and fairness and such. They’re also a major driving force in chronic procrastination, at least in my clients.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives.
They’re not consciously deceiving anyone; they’re sincere in their belief, despite the fact that this sincerity is a deception mechanism.
In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our “Ideal” is the excuse for our failure to actually act. For example, if your ideal is Truth, then you can always ask for more truth to justify something you don’t want to do, but insist that anything you do want to do is supporting the Truth. (Not that any of this takes place consciously, mind you.)
So someone who’s not actively investigating their own ideals for IBRCs is not really serious about being a rationalist. They’re just indulging an ideal of rationalism and signaling their superiority and in-group status, whether they realize it or not. (I sure never realized it all the years that I was doing it.)
I agree that people can take “really good ideas” too far, but I’m not satisfied by the distinction you draw.
The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
Only if you’re speaking in an abstract way that’s divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something “bad” is different from labeling it “not very good”, in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.)
Some people have a hard time grokking this, because intellectually, it’s easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others’ attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you.
However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you’ve got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.
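To make the 1D-collapse point concrete, here is a toy sketch; the field names and numbers are purely illustrative, not a claim about the actual neural encoding:

```python
from dataclasses import dataclass

@dataclass
class Valence2D:
    approach: float  # strength of the acquisition/"want" response
    avoid: float     # strength of the aversive/"flinch" response

    def collapsed(self) -> float:
        """Flatten to a single good/bad score, as the 1D folk model does."""
        return self.approach - self.avoid

mildly_nice = Valence2D(approach=0.2, avoid=0.0)   # weakly appealing, no aversion
conflicted  = Valence2D(approach=0.9, avoid=0.7)   # strongly appealing AND aversive

# Both flatten to roughly the same 1D score, yet they predict very different
# behavior (and different biases) once the two systems are tracked separately.
print(mildly_nice.collapsed(), conflicted.collapsed())
```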
One of the concepts I’ve been playing with is the idea that the advantage of knowing our innate biases is not so much in overcoming them but in identifying and circumventing them.
Your common scenarios regarding risk assessment and perceptions of loss vs. gain generally assume a basis in evolutionary psychology. If these are in fact built into our brains, it strikes me that trying to overcome them directly is a skill we can never fully master, and trying to do so tempts akrasia.
Consider a scenario where you can spend $1000 to have a 50% shot of winning $2500. It’s a clear win in expectation, but turning over the $1000 is tough because of how we weigh loss (if I recall, loss is weighted about twice as heavily as gain). On the other hand, you can just tell yourself (rationalize?) that when you hand over the $1000 you’re getting back $1250 for sure. It’s an incorrect belief, but one I’d probably use, as I wouldn’t have to expend willpower overcoming my faulty loss-prevention circuits.
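A rough sketch of the arithmetic, taking the “loss weighted about twice as much as gain” rule of thumb at face value (the λ here is just that rule of thumb, not a measured value):

```latex
\begin{aligned}
\text{Expected value of the bet:}\quad & 0.5 \times \$2500 - \$1000 = +\$250\\
\text{Felt value with loss weight } \lambda \approx 2\text{:}\quad & 0.5 \times \$2500 - 2 \times \$1000 = -\$750\\
\text{The reframe:}\quad & 0.5 \times \$2500 = \$1250 \text{ expected back, versus the } \$1000 \text{ handed over}
\end{aligned}
```

The bet is positive in expectation, yet the loss-weighted version feels net negative; the reframe works by swapping the 50% shot at $2500 for its expected value, so handing over the $1000 no longer registers as a potential loss.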
People overcome innate but undesired drives all the time, like committing violence out of anger. Your former approach actually doesn’t sound very hard to me, although it might be hard for someone unusually loss-averse. Also, the latter approach sounds like it might not be self-deception in every sense, since there’s no single thing in the mind that is a “belief” (q.v. Instrumental vs. Epistemic – A Bardic Perspective); it seems like this point is being consistently ignored throughout this discussion.
Well, what you want to do (just about by definition) is be rational in the instrumental sense.
I put significant terminal utility in believing true things, and believe that epistemic rationality is very important for instrumental rationality. Furthermore, it is the right decision to choose not to self deceive in general because you can’t even know what you’re missing and there is reason to suspect that it is a lot.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
Maybe you just meant “I’m not interested in that kind of argument because it is so clearly wrong to not be worth my time”, but it seems to come across as “I don’t care even if it’s true”, and that’s probably where the downvote came from.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
This is a confusion based on multiple meanings of “belief”, along the lines of the “does the tree make a sound?” debate. Depending on your definition of belief, the above is either trivial or impossible.
For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it’s really blue afterward, has nothing to do with your “believing” in that moment. “Believe” is a verb—something that you DO, not something that you have.
In common parlance, we think that belief is unified and static—which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it’s still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison.
In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone “believes” they’re not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it.
Notice that these two parts are separate: there is a thought that occurs, and then it is believed… i.e., passively accepted, without dispute.
Thus, there is really no such thing as “belief”—only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it’s the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.
No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren’t “true”, no matter how useful it may be to believe them, either for practical purposes or as a steppingstone to finding something better.
You might well be right that there are loads of “useful falsehoods”, you might even know them personally, but you’re wrong to claim fear of knowingly internalizing such falsehoods is irrational just because you know them to work. Simply taking your word for it would in fact be less rational.
His “provocative operator” (later renamed “green hat thinking”) is deliberately stating an idea that may be quite thoroughly insane, as a way of backing up and coming at it from a different angle.
This sounds like a good creativity hack, but I don’t see what it has to do with accepting false beliefs.
This sounds like a good creativity hack, but I don’t see what it has to do with accepting false beliefs.
It’s an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear “more wrong” than where you started from.
You might well be right that there are loads of “useful falsehoods”, you might even know them personally, but you’re wrong to claim fear of knowingly internalizing such falsehoods is irrational just because you know them to work. Simply taking your word for it would in fact be less rational.
[boggle] Why do you think this has anything to do with me? Placebos are useful falsehoods, and there’s tons of research on them. Go look at Dweck and Seligman on the growth mindset and optimism, respectively.
Hell, go study pickup or hypnosis or even acting, for crying out loud. Direct marketing, even. ANY practical art that involves influencing the beliefs of one’s self or others, that’s tied to reasonably timely feedback.
To the extent that you find the teachings of these arts to be less than “true”, and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true.
However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning.
And the truth is not a substitute for that, however much blind faith you put into it.
The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning. In all other matters, knowing what to do is immensely more important than knowing why… and the why is only useful if it helps you to believe in something enough to make you actually DO something.
It’s an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear “more wrong” than where you started from.
You’ve really hedged your language here. Are we talking about beliefs, or “perspectives”? The two seem very different to me. Does anyone ever acquire a skill without trying new perspectives, unproven variations on existing “known-good” techniques? This is just exploration vs exploitation, which seems quite distinct from belief. I don’t change betting strategies just because I’m in the middle of an experiment.
Why do you think this has anything to do with me?
Because it seems that you’ve had more experience with LW’ers rejecting your useful falsehoods than useful falsehoods in general, and I guessed this as the motive behind your original complaint. I could be mistaken. If I am, I’m curious as to which “terror” you’re referring to. It seems fairly widely accepted here that a certain amount of self-deception is useful in the pickup domain, for example.
However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning.
Really? All self-described teachers of practical arts have the comfort of systematized winning? There are no snake-oil charlatans for whom things just “went well” and are now out to capitalize on it? How can we tell the difference?
To the extent that you find the teachings of these arts to be less than “true”, and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true. [...] The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning.
I think the above exemplifies the mismatch between your philosophy and mine. Yes, it’s incorrect to claim that only true beliefs are useful. But the stuff of true beliefs (reason, empiricism) are the only tools we have when trying to figure out what wins and what doesn’t. To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness. My position is that T trumps U because U is inaccessible without T. I don’t see any other way to reliably arrive at U instead of ~U or V. I am reminded of the Library of Babel.
Really? All self-described teachers of practical arts have the comfort of systematized winning?
I said “for all that”, not “for all of”. Very different meaning. “For all that” means something like “despite the fact that”, or “although”. I.e., “although the teachers of practical arts are often deluded, they at least have the comfort of systematized winning.” What’s more, it’s you who said “self-described”—I referred only to people who have some systematized winning.
There are no snake-oil charlatans for whom things just “went well” and are now out to capitalize on it? How can we tell the difference?
See, that’s the sort of connotation I find interesting. How is “snake oil charlatan” connected to having things go well and wanting to capitalize on it? Would you want to be taught by someone who didn’t have things go well for them? And if they didn’t want to capitalize on it in some fashion, why would they be teaching it? (Even if the only capitalization taking place is that they enjoy teaching!)
If you break down what you’ve just said, it should be easy to see why I think this sort of “thinking” is just irrationally-motivated reaction—the firing off “boo” lights in response to certain buttons being pushed.
To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness.
No—I’m saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren’t really testing U, you’re testing U+X, where X is whatever else it is you believe about U, like, “I’m going to see if this works”, or “I think this is stupid”.
Good epistemic hygiene in testing the usefulness of a belief requires that you not contaminate your test chamber with other beliefs.
Now, that may sound like a defense of psychic phenomena. But it isn’t. You don’t need an absence of skepticism from the overall proceedings, only a temporary absence of skepticism in the performer.
And the measurement of the performer’s results can be as objective and skeptical as you like. (Although, for processes whose intent is also subjective—i.e., to make you feel better about life or be more motivated—then only the subjective experiencer can measure that bit, of course.)
Sometimes, I get clients who will say something like, “Well, I felt better, but how do I know I wasn’t just imagining it?”, and I have to separate out the confusion. Because what they’re really saying is, “At time X I felt good, but now at time Y I’m thinking that maybe it wasn’t real”.
However, if the experiment was “perform procedure Z at time X-1” with a prediction that this will result in a positive experience at time X, then procedure Z worked. And retroactively questioning it is only making you feel bad now—it can’t change how you felt at time X, although it can reconsolidate your memory so it seems like you felt worse at time X.
In other words, it’s questioning yourself afterwards that’s poor epistemic hygiene, because it actually alters your memories. (See all those studies about how asking people leading questions alters their memories.)
This “success at time X, questioning at time Y” pattern is really common among naturally-struggling people. It’s basically the way people prevent themselves from advancing. And it doesn’t matter what procedure Z is—it could be something like making a plan for their day. I’ll ask, “well, did you make a plan?” And they’ll be like, “well, yeah, but what if I didn’t do it right?”
It’s this process of self-questioning that directly results in the problems. If you want to develop skill at something, you can’t tinker with your success criteria after the fact, to make it so that you failed anyway.
Skepticism is useful before you do something, to set up the criteria for measuring something. But it’s not useful while you’re doing the thing, nor after you’ve actually done it.
The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, that they will suddenly become delusional theists or something. Which is really ironic, because in the type of situation I’m describing it’s the questioning that creates the delusion, redefining the past to suit the whims of the present.
I referred only to people who have some systematized winning.
I did assume you held the position that these people are somehow identifiable. If your point was merely “there exist some people out there who are systematic winners”… then I’m not sure I get your point.
How is “snake oil charlatan” connected to having things go well and wanting to capitalize on it?
Because “I figured out the key to success, I succeeded, and now I want to share my secrets with you” is the story that sells, regardless of actual prior circumstance or method.
Would you want to be taught by someone who didn’t have things go well for them? And if they didn’t want to capitalize on it in some fashion, why would they be teaching it?
I don’t think you understand why I bring up charlatans. This is a signaling problem. You’re right… I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there’s going to be a lot of faking going on.
If you break down what you’ve just said, it should be easy to see why I think this sort of “thinking” is just irrationally-motivated reaction—the firing off “boo” lights in response to certain buttons being pushed.
My, you are confident in your theories of human motivation. You said (minus subsequent disclaimers, because this is what I was responding to), “teachers of the practical arts [...] have the comfort of systematized winning”. It seems to me that this “comfort” is claimed far out of proportion to its actual incidence, which bears very directly on the whole issue of distinguishing “useful” signal from noise. If you do have legitimate insights, you’re certainly not making yourself any more accessible by pointing to others in the field. If your point was merely “some deluded people win”… then I’m not sure I get your point.
No—I’m saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren’t really testing U, you’re testing U+X, where X is whatever else it is you believe about U, like, “I’m going to see if this works”, or “I think this is stupid”.
This response isn’t really addressing my point of contention, with the result that I mostly agree with the rest of your comment (sans last paragraph). So I’ll try to explain what I mean by “T”. You say “skepticism is useful before you do something”, and it’s precisely this sort of skepticism that T represents. You leapt straight into explaining how I’ve just got to embrace U in order to make it work, but that doesn’t address why I’m even considering U in the first place. Hence “I first need a true belief T that U is useful”. Pardon me for a moment while I look into how useful it is to believe I’m a goat.
The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, that they will suddenly become delusional theists or something.
Again, I think you’re overstating this fear, but now that you mention theism, I can’t help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you’ve already decided to experimentally swallow… work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?
You’re right… I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there’s going to be a lot of faking going on.
Well, in the case of at least marketing and pickup, you can generally observe the teacher’s own results, as long as you’re being taught directly. For acting, you could observe the ability of the teacher’s students. Copywriting teachers (people who teach the writing of direct marketing ads) can generally give sales statistics comparisons of their improvements over established “controls”. (Btw, in the direct marketing industry, the “control” is just whatever ad you’re currently using; it’s not a control condition where you don’t advertise or run a placebo ad!)
IOW, the practical arts of persuasion and belief do involve at least some empirical basis. One might quibble about what great or excellent acting or pickup might be, but anybody can tell bad acting or failed pickup. And marketing is measurable in dollars spent and actions taken. Marketers don’t always understand math or how to use it, but they’re motivated to use statistical tools for split-testing.
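As a sketch of the kind of arithmetic a split test involves (the numbers are hypothetical, and in practice marketers usually lean on off-the-shelf tools for this):

```python
import math

def split_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing the control ad (A) with a challenger (B).
    Returns both conversion rates and a z score; |z| above ~1.96 is roughly p < 0.05."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical figures: the "control" is just the ad currently running.
print(split_test(conv_a=120, n_a=10_000, conv_b=155, n_b=10_000))
```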
If your point was merely “some deluded people win”… then I’m not sure I get your point.
The ancient Greeks thought fire was an element, but that didn’t stop them from using fire. Developing a practical model and a “true” theory are quite often independent things. My point is that you don’t need a true theory to build useful models, or to learn and use them. And in most practical arts related to belief or persuasion, you will need to “act as if” certain beliefs are true, whether or not they are, because those beliefs nonetheless represent a model for reproducing behaviors that produce results under some set of circumstances.
For example, Seth Roberts’ theory of calorie-flavor association is probably not entirely true—but acting as if it were true produces results for some people under some circumstances. This represents progress, not failure.
“I first need a true belief T that U is useful”.
Right—and my process for that, with respect to self-help techniques, is mainly to look at the claims for a technique, and sort for ones that can be empirically verified and claim comparable or improved benefits relative to the ones that I’ve already tried. Assuming that the cost in time to learn the technique is reasonable (say, a few hours), and it can be implemented and tested quickly, that’s sufficient T probability for me to engage in a test.
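A toy rendering of that screening step, just to show the shape of the filter; the field names and thresholds are placeholders, not anything precise:

```python
def worth_testing(claim, benefit_of_best_known, max_learn_hours=3):
    """Screen a self-help technique's claims before bothering to test it."""
    return (claim["empirically_checkable"]                    # can I verify the result myself?
            and claim["claimed_benefit"] >= benefit_of_best_known
            and claim["hours_to_learn"] <= max_learn_hours    # cheap enough to acquire
            and claim["minutes_to_evaluate"] <= 30)           # testable on a short timescale

candidate = {"empirically_checkable": True, "claimed_benefit": 7,
             "hours_to_learn": 2, "minutes_to_evaluate": 20}
print(worth_testing(candidate, benefit_of_best_known=6))      # -> True
```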
I can’t help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you’ve already decided to experimentally swallow… work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?
Religion doesn’t claim repeatable empirical benefits—in fact they pretty carefully disclaim any. Zen is one of the few religions that contain procedures with claimed empirical benefits (e.g. meditation producing improved concentration and peace of mind), and those claims have actually held up pretty well under scientific investigation as well as my personal experimentation.
So, for me at least, your “T” consists mostly of claimed empirical benefits via a repeatable procedure capable of very short evaluation times—preferably suitable for immediate evaluation of whether something worked or it didn’t.
I do have two things that most people evaluating such things don’t. At first, I tried a lot of these same techniques before I understood monoidealism and somatic markers, and couldn’t get them to work. But once I had even the rudiments of those ideas—not as theory but as experience—I got many of the same things to work quite well.
That suggests very strongly to me that the major hidden variable in interpersonal variation of self-help technique applicability has less to do with the techniques themselves or any inherent property of the learner, than with whether or not they’ve learned to distinguish conscious and unconscious thoughts, and their abstract conception of an emotion or event from its physical representation as a body sensation or as an internal image or sound. Most people (IME) seem to naturally confuse their internal narration about their experiences, and the experiences themselves. (Sort of like in “Drawing On The Right Side Of The Brain”, where people confuse their symbols or abstractions for faces and hair with what they’re actually seeing.)
Separating these things out is the primary skill I teach (as a vehicle to make other self-help techniques accessible), and many people require some sort of live feedback in order to learn it. There is some mild anecdotal evidence that prior experience with meditation helps—i.e. the students who pick it up faster seem somewhat more likely to report prior meditation experience. But I haven’t even tried to be rigorous about investigating that, since even non-meditators can learn the skill.
(Hm, now that I’ve written this, though, I wonder whether some of the Drawing On The Right Side Of The Brain exercises might be helpful in teaching these skills. I’ll have to look into that.)
My, you are confident in your theories of human motivation.
If you look closely at what I said, I was explaining why I thought what I thought about your response, not saying that my thought was correct; I just wanted to explain why I had the impression that I did, not justify the impression or argue that it was actually true. That’s a subtlety that’s hard to convey in text, I suppose.
believing things that aren’t “true”, no matter how useful it may be to believe them
Why should a belief be true just because it’s useful? Or are you saying people are claiming a belief’s usefulness is not true despite the evidence that it’s useful?
Why should a belief be true just because it’s useful? Or are you saying people are claiming a belief’s usefulness is not true despite the evidence that it’s useful?
Neither. I’m saying that a popular attitude of LW culture is to prefer not to “believe” the thing it’s useful to believe, if there is any evidence the belief is not actually true, or often even if there is simply no peer-reviewed evidence explicitly associated with said belief.
For example, self-fulfilling prophecies and placebo effects. Some people here react with horror to the idea of believing anything they can’t statistically validate… some even if the belief has a high probability of making itself come true in the future.
Neither. I’m saying that a popular attitude of LW culture is to prefer not to “believe” the thing it’s useful to believe, if there is any evidence the belief is not actually true, or often even if there is simply no peer-reviewed evidence explicitly associated with said belief.
My immediate reaction to this paragraph is skepticism that I can believe something, if I don’t believe the evidence weighs in its favor; other people might be able to choose what they believe, but I’ve internalized proper epistemology well enough that it’s beyond me. On reflection, though, while I think there is some truth to this, it’s also a cached oversimplification that derives its strength from being part of my identity as a rationalist.
Well, while a self-fulfilling belief might help you accomplish one goal better, it may make you worse off accomplishing another (assuming that belief is not true). It may be the case that some false self-fulfilling beliefs will make you better off throughout your life, but that’s hard to prove.
Indeed. It seems that PJEby is using a definition of “wrong” according to which I am wrong if I act in a way that is implied by certain belief in some false proposition, and that is not implied by certain disbelief in that proposition. He’s right that we should be prepared to sometimes be wrong in that sense. But I’m not convinced anyone else is interpreting “less wrong” in that way.
No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren’t “true”, no matter how useful it may be to believe them, either for practical purposes or as a steppingstone to finding something better. (Ala deBono’s notion of “proto-truth”—i.e., a truth you accept as provisional, rather than absolute.)
(DeBono’s notion of lateral thinking, by the way, is another great example of how, to find something more right, you may start by doing something that’s knowingly more wrong. His “provocative operator” (later renamed “green hat thinking”) is deliberately stating an idea that may be quite thoroughly insane, as a way of backing up and coming at it from a different angle.)
Irrational?
If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.
If you decide that some false beliefs are useful, you don’t get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.
Perhaps you find that a false belief on this subject is more convenient, though...?
(I need to write up a canonical article on “No, we are not interested in convenient self-deceptions that promise short-term round-one instrumental benefits, we are interested in discovering the dividend of pushing epistemic truthseeking as far as we can take it”, since it’s a cached thought deep wisdom Dark Side Epistemology thingy that a lot of newcomers seem to regurgitate.)
For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:
“I’m terrible at math.”
“I hate math class.”
“I’m just dumb.”
That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments—very small inferential gaps, no “trick questions”.
Now, the “I’m terrible at math” attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A’s on their homework papers—when they started to believe that maybe they were good at math, after all—the difference in their confidence and effort was night and day. It was the false belief that enabled them to “take the first steps.”
I think this phenomenon illustrates a very widespread misunderstanding of what math is and how one becomes good at it. Consider the following two anecdotes:
1) Sammy walks into advanced Greek class on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at Greek” and “just dumb”.
2) Sammy walks into advanced algebra on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes that he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at math” and “just dumb”.
Anecdote 1) just seems ridiculous. Of course if you walk into a language class that’s out of your depth, you’re going to be lost—everyone knows that. Every normal person can learn every natural language; there’s no such thing as someone who’s intrinsically “terrible at Greek”. The solution is just to swallow your pride and go back to an earlier class. But it seems like anecdote 2) is not only plausible but probably happens rather often. There is some irrational belief that skill at mathematics is some kind of unrefinable Gift: some people can do it and others just can’t. This idea seems absurd to me: there is no “math gene”, and there are no other examples of skills that some people can acquire and others simply cannot.
It’s actually anecdote 1 that seems plausible to me and anecdote 2 that does not.
I happen to have been a language teacher, teaching adult hobbyists. It seemed to me that lots of my students had very unrealistic ideas about how easy it would be for them to learn a foreign language. They really did come to class expecting one thing, only to find out quite another fifteen minutes later. They typically brushed off their past bad experience with learning, say, Spanish back in school, on the theory that they’d never really been motivated to learn Spanish but were really truly motivated to learn language X, which I was teaching. Then they realized that learning language X involved a lot of the same boring grammar talk and memorization they’d found so hard/boring when learning Spanish. (Of course it’s also possible that my classes just sucked.)
By contrast, no-one walks into advanced algebra classes having no idea what math is about. People who think they’re terrible at math usually infer this from having spent 10 years in a school system where they consistently had trouble with math assignments, performed poorly on math tests and had trouble understanding what math teachers were talking about. Most people who think they’re bad at math probably are actually bad at math. Sure, in some other universe they might have become good at math if some early stimulus had swung another way—maybe another teaching style would have helped, or a role model or whatever. But it also seems very reasonable to think that some people are born with more aptitude for math than others. General intelligence certainly has a large heritable component and I’m sure that holds for the special case of mathematical aptitude.
I spent many years operating under the assumption that everyone was about equally smart, and constructing elaborate explanations for why the reality I was confronted with seemed to be in so much conflict with that theory. Sure, if you add enough epicycles you can do it—but there’s a far simpler theory which has more explanatory power: Some people are “just dumb”. I personally find that a liberating theory to operate under. A lot of my “aha moments” seem to involve either the realization that “yes, people really are that stupid” or the realization that “yes, I really am that stupid”.
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they’re taught a second technique that builds on the previous. So there are two skills required:
1) The discipline to study and practice a technique until you understand it and can apply it easily.
2) The ability to close the inferential gap between one technique and the next.
The second is the source of trouble. I can sit in on a single day’s instruction in a language class (and I have) and learn something about that language. But if a student misses just one jump in math class, the rest of the year will be incomprehensible. No wonder people become convinced they’re “terrible at math” after an experience like that!
How is that unlike other subjects? Seems pretty universal.
An example of dark arts used for a good cause. The problem is that the children weren’t strong enough to understand the concept of being potentially better at math, of it being true that enthusiasm will improve their results.
They can’t feel the truth of the complicated fact of [improving in the future if they work towards it], and so you deceive them into thinking that they are [good already], a simpler alternative.
Vladimir, the problem has nothing to do with strength—some of these students did very well in other classes. Nor is it about effort—some students had already given up and weren’t bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn’t solve the problem.
The problem was simply that they believed “math” was impossible for them. The best way to get rid of that belief—maybe the only effective way—was to give them the experience of succeeding at math. A pep talk or verbal explanation of their problems wouldn’t suffice.
If your definition of “the dark arts” is so general that it includes giving an easy homework assignment, especially when it’s the best solution to a problem, I think you’ve diluted the term beyond usefulness.
Ah, and where’s your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?
I’m not clear if you’re being sarcastic here, or just unable to see that this is precisely the same as my argument for trying out things you know are “wrong”.
Meanwhile, I think that you’re also still assuming that “believe” and “think true” are the same thing. I can make use of information from a book whose author believes in channeled spirits, without having to believe that channeled spirits actually exist.
In the instrumental sense, belief is merely acting as if something is true—which is not the same thing as thinking it’s actually true.
The thing that I think people here get wrong is simply that they assume that if they know something is not actually true, this means it’s permissible to discount any empirical evidence that the procedure associated with the untrue belief is actually useful. (Hypnosis is useful, for example, despite the fact that Anton Mesmer thought it had something to do with magnetism.)
Intermediate level: Rational evidence. I’ve learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain. You think you have error margin? You, a Homo sapiens? I wish I lived in the world you think you live in.
Me too. Which is why I find it astounding that you appear to be arguing against testing things.
The difference in my “bare minimum” versus yours is that I’ve learned not to consider mental techniques as being tested unless I have personally tested them using a “shut up and do the impossible” attitude. A statistically-validated study is too LOW of a bar for me, since I have no way to find out what statistic I will represent until I try the thing for myself.
If a human should be able to reinvent all of physical science by themselves, they should be able to do the same with the mental sciences. In other words, they should be able to test themselves, in a way that allows them to detect their own biases… particularly the biases that lead them to avoid testing things in the first place.
Okay… first, “shut up and do the impossible” may sound like it has a nice ring to you, but there’s something specific I mean by it—a specific place in the hierarchy of enthusiasm, tsuyoku naritai, isshokenmei, make an extraordinary effort, and shut up and do the impossible. You’re talking enthusiasm or tsuyoku naritai. “Shut up and do the impossible” is for “reduce qualia to atoms” or “build a Friendly AI based on rigorous decision theory before anyone manages to throw the first non-rigorous one together”. It is not for testing P. J. Eby’s theories of willpower. That would come under isshokenmei at the highest and sounds more like ordinary enthusiasm to me.
Second, there are, literally, more than ten million people giving advice about akrasia on the Internet. I have no reason to pay attention to your advice in particular at its present level of rigor; if I’m interested in making another try at these things, I’ll go looking at such papers as have been written in the field. You, I’m sure, have lots of clients, and these clients are selected to be enthusiastic about you; keeping a sense of perspective in the face of that large selection effect would be an advanced rationalist sort of discipline wherein knowledge of an abstract statistical fact overcame large social sensory inputs. You also arrived very late in the OBLW sequence and haven’t caught up on your reading. I can understand why you don’t understand why people are paying little attention to you here, when all the feedback on your blog suggests that you are a tremendously intelligent person whose techniques work great. But to me it just sounds like standard self-help with no deeper understanding. “Just try my things!” you say, but there are a thousand others to whom I would rather allocate my effort than you. You are not the only person in the universe ever to write about productivity, and I have other people to whom I would turn for advice well before you, if I were going to make another effort.
It is your failure to understand why the achievements of others are important—why a science paper reporting the result of one experiment on willpower has higher priority for examination by me than you and all your brilliant ideas, all your enthusiasm about them, and all the anecdotal evidence about how it worked for your clients—that is your failure to understand the different standards this community lives by, and your failure to understand why science works and why it is not just pointless formality-masturbation but necessary. Yes, there’s a lot of statistical masturbation out there. But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for whom, really is necessary. This is not generally appreciated by human beings, and appreciating that fact—that it is counterintuitively necessary to do science, that it is not obvious but it really is necessary—is one of the entrance passes to the secret siblinghood of rationalists. This is perhaps something I should write about in more detail, because it’s one of those things so basic that I tend to take it for granted instead of writing about it.
As for your idea that others’ failure to pay attention to you in particular indicates a willpower failure on their part… that’s what we call “egocentric biases in availability”, namely, you think you are a much larger part of others’ mental universe than in fact you are. So much credibility as to try your suggestion instead of a million other suggestions is something that has to be earned. You haven’t earned it, only berated people for not listening to you. There are communities where that works, like self-help, where people are used to being berated, but in the vaster outside universe it will get you nowhere. You have to see the universe as others see it in order to get them to listen to you, and this involves understanding that they do not see you the way you see yourself. To me you are simply one voice among millions.
Necessary for determining true theories, yes. Necessary for one individual to improve their own condition, no. If a mechanic uses the controlled experiment in place of his or her own observation and testing, that is a major fail.
I’ve been saying to try something. Anything. Just test something. Yes, I’ve suggested some ways for testing things, and some things to test. But most of them are not MY things, as I’ve said over and over and over.
At this point I’ve pretty much come to the conclusion that it’s impossible for me to discuss anything related to this topic on LW without this pervasive frame that I am trying to convince people to “try my things”… when in fact I’ve bent over backwards to point as much as possible to other people’s things. Believe it or not, I didn’t come here to promote my work or business.
I don’t care if you test my things. They’re not “my” things anyway. I’m annoyed that you think I don’t understand science, because it shows you’re rounding to the nearest cliche.
I actually advocate using a much higher standard of empirical testing of change techniques than is normally used in measuring psychological processes: observation of somatic markers (see Wikipedia re: the “somatic marker hypothesis”, if you haven’t previously).
Unlike self-reporting via questionnaire, many somatic markers can be treated as objective measures of results, because they are externally visible (facial expressions, posture change, etc.) and thus can be observed and measured by third parties. We can all agree whether someone flinches or grimaces or hangs their head in response to a statement—we are not dependent on the person themselves to tell us their internal reaction, nor do we have to sort through their conscious attempts to make their initial reaction look better.
True, I do not have a quantified scale for these markers, but they are nonetheless quantifiable—and the approach is a direct outgrowth of a promising current neuroscience hypothesis. We can certainly observe whether a response is repeatable, and whether it is changed by any given intervention.
If someone wanted to turn that into controlled science, they’d have a lot of work ahead of them, but it could be done, and it would be a good idea. The catch, of course, is that you’d need to validate a somatic marker scale against some other, more subjective scale that’s already accepted, possibly in the context of some therapy that’s also relatively-validated. It seems to me that there are some chicken-and-egg problems there, but nothing that can’t be done in principle.
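For concreteness, here is a minimal sketch of the kind of before/after tally such a test implies: observers record whether a visible marker appears on repeated presentations of the same trigger, before and after an intervention, and compare rates. Everything in it (the function name, the trial counts) is invented for illustration and is not taken from the comments above.

```python
# Purely illustrative sketch of before/after bookkeeping for externally visible
# somatic markers (a flinch, a grimace) across repeated trials. All numbers invented.

def marker_rate(observations):
    """observations: list of booleans, True = marker observed on that trial."""
    return sum(observations) / len(observations) if observations else 0.0

before = [True, True, True, False, True]    # marker seen on 4 of 5 trials
after = [False, False, True, False, False]  # marker seen on 1 of 5 trials

print(f"marker rate before intervention: {marker_rate(before):.2f}")
print(f"marker rate after intervention:  {marker_rate(after):.2f}")
if marker_rate(after) < marker_rate(before):
    print("response appears to have changed")
else:
    print("no change detected")
```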
When I advocate that people try things, I mean that they should employ more-objective means of measurement—and on far-shorter timescales—than are traditionally used in the self-help field.
When I test some newfangled self-help modality (e.g. EFT, Sedona, etc.) it usually doesn’t take more than 30 minutes after learning the technique to know if it’s any good or not, because I have a way of measuring it that doesn’t depend on me doing any guessing. Either I still flinch or I don’t. Either I get a sinking feeling in my gut or I don’t. I know right then, in less time than it would take to list all the holes in their crazy pseudoscience theories about how the technique is supposed to work. (EFT, for example, works for certain things but its theory is on a par with Anton Mesmer’s theory of animal magnetism.)
I don’t know how you can get any more objective than that, at the level of individual testing. So, if there is anything that I’ve consistently advocated here, it is that it’s possible to test self-help techniques by way of empirical observation of somatic marker responses both “before” and “after”. But even this is not “my” idea.
The somatic marker hypothesis is cutting-edge neuroscience—it still has a long way to go to reach the status of accepted theory. That makes using it as a testing method a bit more bleeding edge.
But for individual use, it has the advantage of being eminently testable.
Regarding the rest of your comment, I don’t see how I can respond, since as far as I can tell, you’re attacking things I never said… and if I had said them, I would agree with your impeccable critique of them. But since I didn’t say them… I don’t see what else I can possibly say.
Maybe I should wait for the canonical article, but is your argument that false beliefs are not part of a first-best approach to rationality, even though they might be part of a second-best approach? Or is it something stronger than that?
I, for one, am interested whether there are convenient self-deceptions that promise instrumental benefits, short-term or otherwise. If nothing else, this will help me adequately assess the potential costs of rationality, rather than taking its benefits as a matter of faith.
Believing things that aren’t true can be instrumentally rational for humans—because their belief systems are “leaky”—lying convincingly is difficult—and thus beliefs can come to do double duty by serving signalling purposes.
Yes, this is indeed the sort of argument that I’m not at all interested in, and naming this site “Less Wrong” instead of “More Wrong” reflects this. I’m going to find where the truth takes me; let me know how that lies thing works out—though I reserve the right not to believe you, of course.
Hypothetical (and I may expand on this in another post):
You’ve been shot. Fortunately, there’s a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he’s got everything he needs except any kind of painkiller. The only thing a painkiller would affect is your (subjective) experience of pain.
He can do one of two things:
A. Say: “Look, I don’t have any painkiller, but I’m going to have to operate anyhow.”
B. Take some opaque saline (or otherwise totally inert) IV, tell you it’s morphine, and administer it to you.
Which do you prefer he does? Knowing what I know about the placebo effect, I’d have to admit I’d rather be deceived. Is this unwise? Why?
Admittedly, I wouldn’t have arrived at the false conclusion through faulty epistemology—it’s probably wise to generally trust doctors when they tell you what they’re administering. So it seems possible to want to have a false belief, even while wanting to maintain efficient epistemology. This might not generalize to Pjeby’s various theories, but it seems that we can think of at least one case where we would desire having a false belief. Admittedly, this might not be a decision we could make, i.e. “Lie to me about what’s in that IV!” might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)
On the other hand, I’m not sure I can think of an example of where we desire to have a belief that we know to be false, which may be the real issue.
The doctor should say “This is the best painkiller I have” and administer it. If the patient confronts the question, it’s already too late.
Are you implying that the doctor should act to trigger a placebo effect, while still making a true statement? Because in the least convenient version of the dilemma, you would have to choose one or the other.
Erased my previous comment. It missed the real point.
If you think the doctor should say, “This is the best painkiller I have,” that suggests you want to believe you are getting a potent painkiller of some kind. You want to believe that it is a potent painkiller, which is false, rather than that it is the most potent of the zero painkillers he has, which is true. The fact that the doctor is not technically lying does not change the fact that you want to believe something that is false.
If the IV contains a saline solution, the Way may want me to believe the IV contains a saline solution, but I sure as Hell want to think it contains a potent painkiller.
(Yes, I realize the irony in using the expression “sure as Hell.”)
“Pain will go away” is a true belief for this situation.
The doctor can do a heck of a lot better than that, even without lying. Ericksonian hypnosis, for example, involves a lot of artfully-vague statements like, “you may notice some sensation happening now”, and amplifying them to lead a person to believe more specific suggestions (such as pain-relief suggestions) that follow. A lot of it can also be done covertly, such that the patient is never consciously aware that a hypnotic procedure is under way.
(Of course, statistics say that relatively few people are able to undergo major surgery with hypnoanesthesia. But if that’s the only painkiller you have, it’d be silly not to use it.)
Omega asks you to silently guess the color of a bead in a jar. Omega then inflicts some amount of pain on you. If Omega believes that you believe the bead to be red (it is in fact blue) then he will administer subjectively less pain.
The win here is for Omega to believe that you believe the bead is red. In the surgery situation, we only have to trick part of our brain.
I suspect that with practice, this would be easier if one were actually attempting it, rather than concluding mid-surgery that morphine does not work on you.
I was trying to explain why it can be instrumentally rational for humans to believe things that aren’t true.
For example, if it is the middle ages and you are surrounded by righteous Christian types, it is probably better (in terms of avoiding being burned at the stake) to believe in god than to be an atheist and pretend to be a believer.
Lying is often dangerous for humans—because the other humans have built-in lie detectors.
I was advocating truthfully expressing your own false beliefs under those circumstances. I was not advocating believing the truth (as an epistemic rationalist and an atheist) and then lying about it to save your skin—or indeed, freely expressing your opinions—thereby getting ostracised, excommunicated, or whatever.
Believing the truth is not my main goal—nor is it a particularly biologically realistic goal. Organisms that prioritise believing the truth over survival and reproduction can be expected to do poorly. So: it is reasonable to expect that most organisms you actually observe do not value truth that highly.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives. In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
Having been in this circumstance in the past—i.e., for most of my life believing myself to be such a truth-seeker—I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes.
It’s what Robert Fritz calls in his books, “an ideal-belief-reality conflict”—a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideas to others as well as striving to live up to them yourself.
Of course, you can have such a conflict about tons of things, but pretty much, anybody who has an Ideal In Capital Letters—something that they defend with zeal—you know this mechanism is at work.
The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I’ve squashed dozens in myself, including several relating to truth and rightness and fairness and such. They’re also a major driving force in chronic procrastination, at least in my clients.
They’re not consciously deceiving anyone; they’re sincere in their belief, despite the fact that this sincerity is a deception mechanism.
Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our “Ideal” is the excuse for our failure to actually act. For example, if your ideal is Truth, then you can always ask for more truth to justify something you don’t want to do, but insist that anything you do want to do is supporting the Truth. (Not that any of this takes place consciously, mind you.)
So someone who’s not actively investigating their own ideals for IBRCs is not really serious about being a rationalist. They’re just indulging an ideal of rationalism and signaling their superiority and in-group status, whether they realize it or not. (I sure never realized it all the years that I was doing it.)
I agree that people can take “really good ideas” too far, but I’m not satisfied by the distinction you draw.
The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
Only if you’re speaking in an abstract way that’s divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something “bad” is different from labeling it “not very good”, in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.)
Some people have a hard time grokking this, because intellectually, it’s easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others’ attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you.
However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you’ve got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.
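A toy sketch of the contrast being drawn—one scalar “goodness” number versus two semi-independent approach/avoidance scores. The field names and numbers below are invented for illustration and do not come from the comment itself.

```python
# Toy contrast between a 1-D "goodness" score and a 2-D model in which attraction
# and aversion vary semi-independently. Field names and values are invented.

from dataclasses import dataclass

@dataclass
class Attitude2D:
    approach: float  # 0.0 .. 1.0, strength of attraction
    avoid: float     # 0.0 .. 1.0, strength of aversion

indifferent = Attitude2D(approach=0.10, avoid=0.05)  # weak pull, weak push
conflicted = Attitude2D(approach=0.90, avoid=0.85)   # strongly wanted AND strongly feared

def flat_score(a: Attitude2D) -> float:
    """Collapse to the 1-D spectrum the comment argues against."""
    return a.approach - a.avoid

# The 1-D view rates both attitudes as the same "mildly positive" 0.05,
# even though they predict very different behavior.
print(round(flat_score(indifferent), 2), round(flat_score(conflicted), 2))
```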
Thanks for sharing.
It all makes me think of the beauty queens—and their wishes for world peace.
One of the concepts I’ve been playing with is the idea that the advantage of knowing our innate biases lies not so much in overcoming them as in identifying and circumventing them.
Your common scenarios regarding risk assessment and perceptions of loss vs. gain generally assume a basis in evolutionary psychology. If these are in fact built into our brains, it strikes me that trying to overcome them directly is a skill we can never fully master, and trying to do so invites akrasia.
Consider a scenario where you can spend $1000 to have a 50% shot of winning $2500. It’s a clear win in expectation, but turning over the $1000 is tough because of how we weigh loss (if I recall correctly, losses are weighted roughly twice as heavily as gains). On the other hand you can just tell yourself (rationalize?) that when you hand over the $1000 you’re getting back $1250 for sure. It’s an incorrect belief, but one I’d probably use, as I wouldn’t have to expend willpower overcoming my faulty loss-prevention circuits.
Which approach would you use?
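For concreteness, a rough sketch of the arithmetic in the scenario above. The stake, prize, and odds are the commenter’s; the loss-aversion coefficient of about 2 is the “weighted roughly twice as heavily” figure they recall, used here only as a crude illustration rather than a faithful prospect-theory model.

```python
# Crude illustration of why a positive-expected-value bet can still "feel" bad
# if losses are weighted about twice as heavily as gains. Not a faithful
# prospect-theory calculation.

stake, prize, p_win = 1000, 2500, 0.5

expected_value = p_win * prize - stake  # 0.5 * 2500 - 1000 = +250
reframed_gain = p_win * prize           # the "you're getting back $1250 for sure" framing

loss_aversion = 2.0                     # losses felt ~2x as strongly as gains
felt_value = p_win * (prize - stake) - (1 - p_win) * loss_aversion * stake

print(f"expected value:       {expected_value:+.0f}")  # +250: a good bet on paper
print(f"reframed 'sure' gain: {reframed_gain:.0f}")    # 1250
print(f"loss-averse feel:     {felt_value:+.0f}")      # -250: why handing over $1000 is tough
```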
Not true; $2500 is not necessarily 2.5 times as useful as $1000.
http://en.wikipedia.org/wiki/Marginal_utility#Diminishing_marginal_utility
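To make the linked point concrete: with a concave utility function, gaining $2500 can be worth much less than 2.5 times gaining $1000. The logarithmic utility and the $2000 starting wealth below are assumptions chosen only for illustration.

```python
# Diminishing marginal utility: under log utility and an assumed current wealth of
# $2000, a $2500 gain is worth less than 2.5 times a $1000 gain.

from math import log

wealth = 2000.0

def utility_gain(amount, w=wealth):
    return log(w + amount) - log(w)

print(f"u(+$2500)       = {utility_gain(2500):.3f}")        # ~0.81
print(f"2.5 * u(+$1000) = {2.5 * utility_gain(1000):.3f}")  # ~1.01
```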
People overcome innate but undesired drives all the time, like committing violence out of anger. Your former approach actually doesn’t sound very hard to me, although it might be hard for someone unusually loss-averse. Also, the latter approach sounds like it might not be self-deception in every sense, since there’s no single thing in the mind that is a “belief” (q.v. Instrumental vs. Epistemic – A Bardic Perspective); it seems like this point is being consistently ignored throughout this discussion.
Well, what you want to do (just about by definition) is be rational in the instrumental sense.
I place significant terminal value on believing true things, and believe that epistemic rationality is very important for instrumental rationality. Furthermore, it is the right decision not to self-deceive in general, because you can’t even know what you’re missing, and there is reason to suspect that it is a lot.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
Maybe you just meant “I’m not interested in that kind of argument because it is so clearly wrong to not be worth my time”, but it seems to come across as “I don’t care even if it’s true”, and that’s probably where the downvote came from.
This is a confusion based on multiple meanings of “belief”, along the lines of the “does the tree make a sound?” debate. Depending on your definition of belief, the above is either trivial or impossible.
For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it’s really blue afterward, have nothing to do with your “believing” in that moment. “Believe” is a verb—something that you DO, not something that you have.
In common parlance, we think that belief is unified and static—which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it’s still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison.
In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone “believes” they’re not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it.
Notice that these two parts are separate: there is a thought that occurs, and then it is believed… i.e., passively accepted, without dispute.
Thus, there is really no such thing as “belief”—only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it’s the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.
You might well be right that there are loads of “useful falsehoods”—you might even know some of them personally—but you’re wrong to claim that fear of knowingly internalizing such falsehoods is irrational just because you know them to work. Simply taking your word for it would in fact be less rational.
This sounds like a good creativity hack, but I don’t see what it has to do with accepting false beliefs.
It’s an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear “more wrong” than where you started from.
[boggle] Why do you think this has anything to do with me? Placebos are useful falsehoods, and there’s tons of research on them. Go look at Dweck and Seligman on the growth mindset and optimism, respectively.
Hell, go study pickup or hypnosis or even acting, for crying out loud. Direct marketing, even. ANY practical art that involves influencing the beliefs of one’s self or others, that’s tied to reasonably timely feedback.
To the extent that you find the teachings of these arts to be less than “true”, and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true.
However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning.
And the truth is not a substitute for that, however much blind faith you put into it.
The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning. In all other matters, knowing what to do is immensely more important than knowing why… and the why is only useful if it helps you to believe in something enough to make you actually DO something.
You’ve really hedged your language here. Are we talking about beliefs, or “perspectives”? The two seem very different to me. Does anyone ever acquire a skill without trying new perspectives, unproven variations on existing “known-good” techniques? This is just exploration vs exploitation, which seems quite distinct from belief. I don’t change betting strategies just because I’m in the middle of an experiment.
Because it seems that you’ve had more experience with LW’ers rejecting your useful falsehoods than useful falsehoods in general, and I guessed this as the motive behind your original complaint. I could be mistaken. If I am, I’m curious as to which “terror” you’re referring to. It seems fairly widely accepted here that a certain amount of self-deception is useful in the pickup domain, for example.
Really? All self-described teachers of practical arts have the comfort of systematized winning? There are no snake-oil charlatans for whom things just “went well” and are now out to capitalize on it? How can we tell the difference?
I think the above exemplifies the mismatch between your philosophy and mine. Yes, it’s incorrect to claim that only true beliefs are useful. But the stuff of true beliefs (reason, empiricism) is the only toolkit we have when trying to figure out what wins and what doesn’t. To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness. My position is that T trumps U because U is inaccessible without T. I don’t see any other way to reliably arrive at U instead of ~U or V. I am reminded of the Library of Babel.
I said “for all that”, not “for all of”. Very different meaning. “For all that” means something like “despite the fact that”, or “although”. I.e., “although the teachers of practical arts are often deluded, they at least have the comfort of systematized winning.” What’s more, it’s you who said “self-described”—I referred only to people who have some systematized winning.
See, that’s the sort of connotation I find interesting. How is “snake oil charlatan” connected to having things go well and wanting to capitalize on it? Would you want to be taught by someone who didn’t have things go well for them? And if they didn’t want to capitalize on it in some fashion, why would they be teaching it? (Even if the only capitalization taking place is that they enjoy teaching!)
If you break down what you’ve just said, it should be easy to see why I think this sort of “thinking” is just irrationally-motivated reaction—the firing-off of “boo” lights in response to certain buttons being pushed.
No—I’m saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren’t really testing U, you’re testing U+X, where X is whatever else it is you believe about U, like, “I’m going to see if this works”, or “I think this is stupid”.
Good epistemic hygiene in testing the usefulness of a belief requires that you not contaminate your test chamber with other beliefs.
Now, that may sound like a defense of psychic phenomena. But it isn’t. You don’t need an absence of skepticism from the overall proceedings, only a temporary absence of skepticism in the performer.
And the measurement of the performer’s results can be as objective and skeptical as you like. (Although, for processes whose intent is also subjective—i.e., to make you feel better about life or be more motivated—then only the subjective experiencer can measure that bit, of course.)
Sometimes, I get clients who will say something like, “Well, I felt better, but how do I know I wasn’t just imagining it?”, and I have to separate out the confusion. Because what they’re really saying is, “At time X I felt good, but now at time Y I’m thinking that maybe it wasn’t real”.
However, if the experiment was “perform procedure Z at time X-1” with a prediction that this will result in a positive experience at time X, then procedure Z worked. And retroactively questioning it is only making you feel bad now—it can’t change how you felt at time X, although it can reconsolidate your memory so it seems like you felt worse at time X.
In other words, it’s questioning yourself afterwards that’s poor epistemic hygiene, because it actually alters your memories. (See all those studies about how asking people leading questions alters their memories.)
This “success at time X, questioning at time Y” pattern is really common among naturally-struggling people. It’s basically the way people prevent themselves from advancing. And it doesn’t matter what procedure Z is—it could be something like making a plan for their day. I’ll ask, “well, did you make a plan?” And they’ll be like, “well, yeah, but what if I didn’t do it right?”
It’s this process of self-questioning that directly results in the problems. If you want to develop skill at something, you can’t tinker with your success criteria after the fact, to make it so that you failed anyway.
Skepticism is useful before you do something, to set up the criteria for measuring something. But it’s not useful while you’re doing the thing, nor after you’ve actually done it.
The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, that they will suddenly become delusional theists or something. Which is really ironic, because in the type of situation I’m describing it’s the questioning that creates the delusion, redefining the past to suit the whims of the present.
I did assume you held the position that these people are somehow identifiable. If your point was merely “there exist some people out there who are systematic winners”… then I’m not sure I get your point.
Because “I figured out the key to success, I succeeded, and now I want to share my secrets with you” is the story that sells, regardless of actual prior circumstance or method.
I don’t think you understand why I bring up charlatans. This is a signaling problem. You’re right… I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there’s going to be a lot of faking going on.
My, you are confident in your theories of human motivation. You said (minus subsequent disclaimers, because this is what I was responding to), “teachers of the practical arts [...] have the comfort of systematized winning”. It seems to me that this “comfort” is claimed far out of proportion to its actual incidence, which bears very directly on the whole issue of distinguishing “useful” signal from noise. If you do have legitimate insights, you’re certainly not making yourself any more accessible by pointing to others in the field. If your point was merely “some deluded people win”… then I’m not sure I get your point.
This response isn’t really addressing my point of contention, with the result that I mostly agree with the rest of your comment (sans last paragraph). So I’ll try to explain what I mean by “T”. You say “skepticism is useful before you do something”, and it’s precisely this sort of skepticism that T represents. You leapt straight into explaining how I’ve just got to embrace U in order to make it work, but that doesn’t address why I’m even considering U in the first place. Hence “I first need a true belief T that U is useful”. Pardon me for a moment while I look into how useful it is to believe I’m a goat.
Again, I think you’re overstating this fear, but now that you mention theism, I can’t help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you’ve already decided to experimentally swallow… work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?
Well, in the case of at least marketing and pickup, you can generally observe the teacher’s own results, as long as you’re being taught directly. For acting, you could observe the ability of the teacher’s students. Copywriting teachers (people who teach the writing of direct marketing ads) can generally give sales statistics comparisons of their improvements over established “controls”. (Btw, in the direct marketing industry, the “control” is just whatever ad you’re currently using; it’s not a control condition where you don’t advertise or run a placebo ad!)
IOW, the practical arts of persuasion and belief do involve at least some empirical basis. One might quibble about what great or excellent acting or pickup might be, but anybody can tell bad acting or failed pickup. And marketing is measurable in dollars spent and actions taken. Marketers don’t always understand math or how to use it, but they’re motivated to use statistical tools for split-testing.
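As an aside, here is a minimal sketch of the sort of split-test being referred to, where the “control” is simply the current ad and the outcome is conversions per impression. All figures are invented, and a real comparison would still need a significance test before declaring a winner.

```python
# Toy split-test: compare a challenger ad against the current "control" ad.
# Conversion counts are invented for illustration.

def conversion_rate(conversions, impressions):
    return conversions / impressions

control = conversion_rate(120, 10_000)     # current ad: 1.20%
challenger = conversion_rate(155, 10_000)  # new ad:     1.55%

lift = (challenger - control) / control
print(f"control:    {control:.2%}")
print(f"challenger: {challenger:.2%}")
print(f"lift:       {lift:+.1%}")  # statistical significance would still need,
                                   # e.g., a two-proportion z-test
```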
The ancient Greeks thought fire was an element, but that didn’t stop them from using fire. Developing a practical model and a “true” theory are quite often independent things. My point is that you don’t need a true theory to build useful models, or to learn and use them. And in most practical arts related to belief or persuasion, you will need to “act as if” certain beliefs are true, whether or not they are, because those beliefs nonetheless represent a model for reproducing behaviors that produce results under some set of circumstances.
For example, Seth Roberts’ theory of calorie-flavor association is probably not entirely true—but acting as if it were true produces results for some people under some circumstances. This represents progress, not failure.
Right—and my process for that, with respect to self-help techniques, is mainly to look at the claims for a technique, and sort for ones that can be empirically verified and claim comparable or improved benefits relative to the ones that I’ve already tried. Assuming that the cost in time to learn the technique is reasonable (say, a few hours), and it can be implemented and tested quickly, that’s sufficient T probability for me to engage in a test.
Religions don’t claim repeatable empirical benefits—in fact they pretty carefully disclaim any. Zen is one of the few religions that contain procedures with claimed empirical benefits (e.g. meditation producing improved concentration and peace of mind), and those claims have actually held up pretty well under scientific investigation as well as my personal experimentation.
So, for me at least, your “T” consists mostly of claimed empirical benefits via a repeatable procedure capable of very short evaluation times—preferably suitable for immediate evaluation of whether something worked or it didn’t.
I do have two things that most people evaluating such things don’t. At first, I tried a lot of these same techniques before I understood monoidealism and somatic markers, and couldn’t get them to work. But once I had even the rudiments of those ideas—not as theory but as experience—I got many of the same things to work quite well.
That suggests very strongly to me that the major hidden variable in interpersonal variation of self-help technique applicability has less to do with the techniques themselves or any inherent property of the learner than with whether or not they’ve learned to distinguish conscious and unconscious thoughts, and their abstract conception of an emotion or event from its physical representation as a body sensation or as an internal image or sound. Most people (IME) seem to naturally confuse their internal narration about their experiences, and the experiences themselves. (Sort of like in “Drawing On The Right Side Of The Brain”, where people confuse their symbols or abstractions for faces and hair with what they’re actually seeing.)
Separating these things out is the primary skill I teach (as a vehicle to make other self-help techniques accessible), and many people require some sort of live feedback in order to learn it. There is some mild anecdotal evidence that prior experience with meditation helps—i.e. the students who pick it up faster seem somewhat more likely to report prior meditation experience. But I haven’t even tried to be rigorous about investigating that, since even non-meditators can learn the skill.
(Hm, now that I’ve written this, though, I wonder whether some of the Drawing On The Right Side Of The Brain exercises might be helpful in teaching these skills. I’ll have to look into that.)
If you look closely at what I said, I was explaining why I thought what I thought about your response, not saying that my thought was correct; I just wanted to explain why I had the impression that I did, not justify the impression or argue that it was actually true. That’s a subtlety that’s hard to convey in text, I suppose.
Why should a belief be true just because it’s useful? Or are you saying people are claiming a belief’s usefulness is not true despite the evidence that it’s useful?
Neither. I’m saying that a popular attitude of LW culture is to prefer not to “believe” the thing it’s useful to believe, if there is any evidence the belief is not actually true, or often even if there is simply no peer-reviewed evidence explicitly associated with said belief.
For example, self-fulfilling prophecies and placebo effects. Some people here react with horror to the idea of believing anything they can’t statistically validate… some even if the belief has a high probability of making itself come true in the future.
My immediate reaction to this paragraph is skepticism that I can believe something, if I don’t believe the evidence weighs in its favor; other people might be able to choose what they believe, but I’ve internalized proper epistemology well enough that it’s beyond me. On reflection, though, while I think there is some truth to this, it’s also a cached oversimplification that derives its strength from being part of my identity as a rationalist.
Related to: Belief in Self-Deception, Litany of Tarski.
Well, while a self-fulfilling belief might help you accomplish one goal better, it may make you worse off accomplishing another (assuming that belief is not true). It may be the case that some false self-fulfilling beliefs will make you better off throughout your life, but that’s hard to prove.
Thank you for eloquently demonstrating precisely what I’m talking about.