That it’s not aimed at being “more right”—which is not at all the same as being less wrong.
To be more right often requires you to first be more wrong. Whether you try something new or try to formulate a model or hypothesis, you must at minimum be prepared for the result to be more wrong at first.
In contrast, you can be “less wrong” just by doing nothing, or by being a critic of those who do something. But in the real world (and even in science), you can never win BIG—and it’s often hard to win at all—if you never place any bets.
When it comes to knowledge of the world you want to be more right.
But when it comes to reasoning I do think it is more about being less wrong… there are so many traps you can fall into, and learning how to avoid them is so much of being able to reason effectively.
Indeed. It seems that PJEby is using a definition of “wrong” according to which I am wrong if I act in a way that is implied by certain belief in some false proposition, and that is not implied by certain disbelief in that proposition. He’s right that we should be prepared to sometimes be wrong in that sense. But I’m not convinced anyone else is interpreting “less wrong” in that way.
It seems that PJEby is using a definition of “wrong” according to which I am wrong if I act in a way that is implied by certain belief in some false proposition, and that is not implied by certain disbelief in that proposition.
No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren’t “true”, no matter how useful it may be to believe them, either for practical purposes or as a steppingstone to finding something better. (À la de Bono’s notion of “proto-truth”—i.e., a truth you accept as provisional, rather than absolute.)
(De Bono’s notion of lateral thinking, by the way, is another great example of how, to find something more right, you may start by doing something that’s knowingly more wrong. His “provocative operator” (later renamed “green hat thinking”) is deliberately stating an idea that may be quite thoroughly insane, as a way of backing up and coming at it from a different angle.)
No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren’t “true”
Irrational?
If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.
If you decide that some false beliefs are useful, you don’t get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.
Perhaps you find that a false belief on this subject is more convenient, though...?
(I need to write up a canonical article on “No, we are not interested in convenient self-deceptions that promise short-term round-one instrumental benefits, we are interested in discovering the dividend of pushing epistemic truthseeking as far as we can take it”, since it’s a cached thought deep wisdom Dark Side Epistemology thingy that a lot of newcomers seem to regurgitate.)
For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:
“I’m terrible at math.”
“I hate math class.”
“I’m just dumb.”
That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments—very small inferential gaps, no “trick questions”.
Now, the “I’m terrible at math” attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A’s on their homework papers—when they started to believe that maybe they were good at math, after all—the difference in their confidence and effort was night and day. It was the false belief that enabled them to “take the first steps.”
I think this phenomenon illustrates a very widespread misunderstanding of what math is and how one becomes good at it. Consider the following two anecdotes:
1) Sammy walks into advanced Greek class on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at Greek” and “just dumb”.
2) Sammy walks into advanced algebra on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes that he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at math” and “just dumb”.
Anecdote 1) just seems ridiculous. Of course if you walk into a language class that’s out of your depth, you’re going to be lost; everyone knows that. Every normal person can learn every natural language; there’s no such thing as someone who’s intrinsically “terrible at Greek”. The solution is just to swallow your pride and go back to an earlier class. But it seems like anecdote 2) is not only plausible but probably happens rather often. There is some irrational belief that skill at mathematics is some kind of unrefinable Gift: some people can do it and others just can’t. This idea seems absurd to me: there is no “math gene”; there are no other examples of skills that some people can acquire and others simply cannot.
It’s actually anecdote 1 that seems plausible to me and anecdote 2 that does not.
I happen to have been a language teacher, teaching adult hobbyists. It seemed to me that lots of my students had very unrealistic ideas about how easy it would be for them to learn a foreign language. They really did come to class expecting one thing and 15 minutes later finding out quite another thing. They typically brushed off their past bad experience with learning, say, Spanish back in school, on the theory that they’d never really been motivated to learn Spanish but they were really truly motivated to learn language X which I was teaching. Then they realized that learning language X involved a lot of the same boring grammar talk and memorization which they’d found so hard/boring when learning Spanish. (Of course it’s also possible that my classes just sucked.)
By contrast, no-one walks into advanced algebra classes having no idea what math is about. People who think they’re terrible at math usually infer this from having spent 10 years in a school system where they consistently had trouble with math assignments, performed poorly on math tests and had trouble understanding what math teachers were talking about. Most people who think they’re bad at math probably are actually bad at math. Sure, in some other universe they might have become good at math if some early stimulus had swung another way—maybe another teaching style would have helped, or a role model or whatever. But it also seems very reasonable to think that some people are born with more aptitude for math than others. General intelligence certainly has a large heritable component and I’m sure that holds for the special case of mathematical aptitude.
I spent many years operating under the assumption that everyone was about equally smart, and constructing elaborate explanations for why the reality I was confronted with seemed to be in so much conflict with that theory. Sure, if you add enough epicycles you can do it—but there’s a far simpler theory which has more explanatory power: Some people are “just dumb”. I personally find that a liberating theory to operate under. A lot of my “aha moments” seem to involve either the realization that “yes, people really are that stupid” or the realization that “yes, I really am that stupid”.
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they’re taught a second technique that builds on the previous. So there are two skills required:
The discipline to study and practice a technique until you understand it and can apply it easily.
The ability to close the inferential gap between one technique and the next.
The second is the source of trouble. I can sit in (and have sat in) on a single day’s instruction of a language class and learn something about that language. But if a student misses just one jump in math class, the rest of the year will be incomprehensible. No wonder people become convinced they’re “terrible at math” after an experience like that!
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they’re taught a second technique that builds on the previous.
How is that unlike other subjects? Seems pretty universal.
An example of dark arts used for a good cause. The problem is that the children weren’t strong enough to understand the concept of being potentially better at math, of it being true that enthusiasm will improve their results.
They can’t feel the truth of the complicated fact of [improving in the future if they work towards it], and so you deceive them into thinking that they are [good already], a simpler alternative.
Vladimir, the problem has nothing to do with strength—some of these students did very well in other classes. Nor is it about effort—some students had already given up and weren’t bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn’t solve the problem.
The problem was simply that they believed “math” was impossible for them. The best way to get rid of that belief—maybe the only effective way—was to give them the experience of succeeding at math. A pep talk or verbal explanation of their problems wouldn’t suffice.
If your definition of “the dark arts” is so general that it includes giving an easy homework assignment, especially when it’s the best solution to a problem, I think you’ve diluted the term beyond usefulness.
If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.
Ah, and where’s your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?
If you decide that some false beliefs are useful, you don’t get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.
I’m not clear if you’re being sarcastic here, or just unable to see that this is precisely the same as my argument for trying out things you know are “wrong”.
Meanwhile, I think that you’re also still assuming that “believe” and “think true” are the same thing. I can make use of information from a book whose author believes in channeled spirits, without having to believe that channeled spirits actually exist.
In the instrumental sense, belief is merely acting as if something is true—which is not the same thing as thinking it’s actually true.
The thing that I think people here get wrong is simply that they assume that if they know something is not actually true, this means it’s permissible to discount any empirical evidence that the procedure associated with the untrue belief is actually useful. (Hypnosis is useful, for example, despite the fact that Anton Mesmer thought it had something to do with magnetism.)
Ah, and where’s your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?
Intermediate level: Rational evidence. I’ve learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain. You think you have error margin? You, a Homo sapiens? I wish I lived in the world you think you live in.
I’ve learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain.
Me too. Which is why I find it astounding that you appear to be arguing against testing things.
The difference in my “bare minimum” versus yours is that I’ve learned not to consider mental techniques as being tested unless I have personally tested them using a “shut up and do the impossible” attitude. A statistically-validated study is too LOW of a bar for me, since I have no way to find out what statistic I will represent until I try the thing for myself.
If a human should be able to reinvent all of physical science by themselves, they should be able to do the same with the mental sciences. In other words, they should be able to test themselves, in a way that allows them to detect their own biases… particularly the biases that lead them to avoid testing things in the first place.
Okay… first, “shut up and do the impossible” may sound like it has a nice ring to you, but there’s something specific I mean by it—a specific place in the hierarchy of enthusiasm, tsuyoku naritai, isshokenmei, make an extraordinary effort, and shut up and do the impossible. You’re talking enthusiasm or tsuyoku naritai. “Shut up and do the impossible” is for “reduce qualia to atoms” or “build a Friendly AI based on rigorous decision theory before anyone manages to throw the first non-rigorous one together”. It is not for testing P. J. Eby’s theories of willpower. That would come under isshokenmei at the highest and sounds more like ordinary enthusiasm to me.
Second, there are, literally, more than ten million people giving advice about akrasia on the Internet. I have no reason to pay attention to your advice in particular at its present level of rigor; if I’m interested in making another try at these things, I’ll go looking at such papers as have been written in the field. You, I’m sure, have lots of clients and these clients are selected to be enthusiastic about you; keeping a sense of perspective in the face of that large selection effect would be an advanced rationalist sort of discipline wherein knowledge of an abstract statistical fact overcame large social sensory inputs, and you arrived very late in the OBLW sequence and haven’t caught up on your reading. I can understand why you don’t understand why people are paying little attention to you here, when all the feedback on your blog suggests that you are a tremendously intelligent person whose techniques work great. But to me it just sounds like standard self-help with no deeper understanding. “Just try my things!” you say, but there are a thousand others to whom I would rather allocate my effort than you. You are not the only person in the universe ever to write about productivity, and I have other people to whom I would turn for advice well before you, if I was going to make another effort.
It is your failure to understand why the achievements of others are important—why a science paper reporting the result of one experiment on willpower has higher priority for examination by me than you and all your brilliant ideas and all your enthusiasm about them and all the anecdotal evidence about how it worked for your clients—that is your failure to understand the different standards this community lives by, and your failure to understand why science works, and why it is not just pointless formality-masturbation but necessary. Yes, there’s a lot of statistical masturbation out there. But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for who, really is necessary. This is not generally appreciated by human beings, and appreciating that fact, that it is counterintuitively necessary to do science, that it is not obvious but it really is necessary, is one of the entrance passes to the secret siblinghood of rationalists. This is perhaps something I should write about in more detail, because it’s one of those things so basic that I tend to take it for granted instead of writing about it.
As for your idea that others’ failure to pay attention to you in particular indicates a willpower failure on their part… that’s what we call “egocentric biases in availability”, namely, you think you are a much larger part of others’ mental universe than in fact you are. So much credibility as to try your suggestion instead of a million other suggestions is something that has to be earned. You haven’t earned it, only berated people for not listening to you. There are communities where that works, like self-help, where people are used to being berated, but in the vaster outside universe it will get you nowhere. You have to see the universe as others see it in order to get them to listen to you, and this involves understanding that they do not see you the way you see yourself. To me you are simply one voice among millions.
But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for who, really is necessary.
Necessary for determining true theories, yes. Necessary for one individual to improve their own condition, no. If a mechanic uses the controlled experiment in place of his or her own observation and testing, that is a major fail.
“Just try my things!” you say,
I’ve been saying to try something. Anything. Just test something. Yes, I’ve suggested some ways for testing things, and some things to test. But most of them are not MY things, as I’ve said over and over and over.
At this point I’ve pretty much come to the conclusion that it’s impossible for me to discuss anything related to this topic on LW without this pervasive frame that I am trying to convince people to “try my things”… when in fact I’ve bent over backwards to point as much as possible to other people’s things. Believe it or not, I didn’t come here to promote my work or business.
I don’t care if you test my things. They’re not “my” things anyway. I’m annoyed that you think I don’t understand science, because it shows you’re rounding to the nearest cliche.
I actually advocate using a much higher standard of empirical testing of change techniques than is normally used in measuring psychological processes: observation of somatic markers (see Wikipedia re: the “somatic marker hypothesis”, if you haven’t previously).
Unlike self-reporting via questionnaire, many somatic markers can be treated as objective measures of results, because they are externally visible (facial expressions, posture change, etc.) and thus can be observed and measured by third parties. We can all agree whether someone flinches or grimaces or hangs their head in response to a statement—we are not dependent on the person themselves to tell us their internal reaction, nor do we have to sort through their conscious attempts to make their initial reaction look better.
True, I do not have a quantified scale for these markers, but they are nonetheless quantifiable—and the approach is a direct outgrowth of a promising current neuroscience hypothesis. We can certainly observe whether a response is repeatable, and whether it is changed by any given intervention.
If someone wanted to turn that into controlled science, they’d have a lot of work ahead of them, but it could be done, and it would be a good idea. The catch, of course, is that you’d need to validate a somatic marker scale against some other, more subjective scale that’s already accepted, possibly in the context of some therapy that’s also relatively-validated. It seems to me that there are some chicken-and-egg problems there, but nothing that can’t be done in principle.
When I advocate that people try things, I mean that they should employ more-objective means of measurement—and on far-shorter timescales—than are traditionally used in the self-help field.
When I test some newfangled self-help modality (e.g. EFT, Sedona, etc.) it usually doesn’t take more than 30 minutes after learning the technique to know if it’s any good or not, because I have a way of measuring it that doesn’t depend on me doing any guessing. Either I still flinch or I don’t. Either I get a sinking feeling in my gut or I don’t. I know right then, in less time than it would take to list all the holes in their crazy pseudoscience theories about how the technique is supposed to work. (EFT, for example, works for certain things but its theory is on a par with Anton Mesmer’s theory of animal magnetism.)
I don’t know how you can get any more objective than that, at the level of individual testing. So, if there is anything that I’ve consistently advocated here, it’s that it’s possible to test self-help techniques by way of empirical observation of somatic marker responses both “before” and “after”. But even this is not “my” idea.
The somatic marker hypothesis is cutting-edge neuroscience—it still has a long way to go to reach the status of accepted theory. That makes using it as a testing method a bit more bleeding edge.
But for individual use, it has the advantage of being eminently testable.
Regarding the rest of your comment, I don’t see how I can respond, since as far as I can tell, you’re attacking things I never said… and if I had said them, I would agree with your impeccable critique of them. But since I didn’t say them… I don’t see what else I can possibly say.
Maybe I should wait for the canonical article, but is your argument that false beliefs are not part of a first-best approach to rationality, even though they might be part of a second-best approach? Or is it something stronger than that?
I, for one, am interested whether there are convenient self-deceptions that promise instrumental benefits, short-term or otherwise. If nothing else, this will help me adequately assess the potential costs of rationality, rather than taking its benefits as a matter of faith.
Believing things that aren’t true can be instrumentally rational for humans—because their belief systems are “leaky”—lying convincingly is difficult—and thus beliefs can come to do double duty by serving signalling purposes.
Yes, this is indeed the sort of argument that I’m not at all interested in, and naming this site “Less Wrong” instead of “More Wrong” reflects this. I’m going to find where the truth takes me; let me know how that lies thing works out—though I reserve the right not to believe you, of course.
Hypothetical (and I may expand on this in another post):
You’ve been shot. Fortunately, there’s a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he’s got everything he needs except any kind of pain killer. The only effect of the painkiller is going to be on your (subjective) experience of pain.
He can do one of two things:
A. Say, “Look, I don’t have any painkiller, but I’m going to have to operate anyhow.”
B. Take some opaque saline (or otherwise totally inert) IV, tell you it’s morphine, and administer it to you.
Which do you prefer he does? Knowing what I know about the placebo effect, I’d have to admit I’d rather be deceived. Is this unwise? Why?
Admittedly, I haven’t attained a false conclusion via my epistemology. It’s probably wise to generally trust doctors when they tell you what they’re administering. So it seems possible to want to have false belief, even while wanting to maintain efficient epistemology. This might not generalize to Pjeby’s various theories, but it seems that we can think of at least one case where we would desire having a false belief. Admittedly, this might not be a decision we could make, i.e. “Lie to me about what’s in that IV!” might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)
On the other hand, I’m not sure I can think of an example of where we desire to have a belief that we know to be false, which may be the real issue.
Are you implying that the doctor should act to trigger a placebo effect, while still making a true statement? Because in the least convenient version of the dilemma, you would have to choose one or the other.
Erased my previous comment. It missed the real point.
If you think the doctor should say, “This is the best painkiller I have,” that suggests you want to believe you are getting a potent painkiller of some kind. You want to believe that it is a potent painkiller, which is false, as opposed to it is the most potent of the zero painkillers he has, which is true. The fact that the doctor is not technically lying does not change the fact you want to believe something that is false.
If the IV contains a saline solution, the Way may want me to believe the IV contains a saline solution, but I sure as Hell want to think it contains a potent painkiller.
(Yes, I realize the irony in using the expression “sure as Hell.”)
The doctor should say “This is the best painkiller I have” and administer it.
The doctor can do a heck of a lot better than that, even without lying. Ericksonian hypnosis, for example, involves a lot of artfully-vague statements like, “you may notice some sensation happening now”, and amplifying them to lead a person to believe more specific suggestions (such as pain-relief suggestions) that follow. A lot of it can also be done covertly, such that the patient is never consciously aware that a hypnotic procedure is under way.
(Of course, statistics say that relatively few people are able to undergo major surgery with hypnoanesthesia. But if that’s the only painkiller you have, it’d be silly not to use it.)
Omega asks you to silently guess the color of a bead in a jar. Omega then inflicts some amount of pain on you. If Omega believes that you believe the bead to be red (it is in fact blue) then he will administer subjectively less pain.
The win here is for Omega to believe that you believe the bead is red. In the surgery situation, we only have to trick part of our brain.
I suspect that with practice, this would be easier if one were actually attempting it, rather than concluding mid-surgery that morphine does not work on you.
I was trying to explain why it can be instrumentally rational for humans to believe things that aren’t true.
For example, if it is the middle ages and you are surrounded by righteous Christian types, it is probably better (in terms of avoiding being burned at the stake) to believe in god than to be an atheist and pretend to be a believer.
Lying is often dangerous for humans—because the other humans have built-in lie detectors.
I was advocating truthfully expressing your own false beliefs under those circumstances. I was not advocating believing the truth (as an epistemic rationalist and an atheist) and then lying about it to save your skin—or indeed, freely expressing your opinions—thereby getting ostracised, excommunicated—or whatever.
Believing the truth is not my main goal—nor is it a particularly biologically realistic goal. Organisms that prioritise believing the truth over survival and reproduction can be expected to do poorly. So: it is reasonable to expect that most organisms you actually observe do not value truth that highly.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives. In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
Having been in this circumstance in the past—i.e., for most of my life believing myself to be such a truth-seeker—I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes.
It’s what Robert Fritz calls in his books, “an ideal-belief-reality conflict”—a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideas to others as well as striving to live up to them yourself.
Of course, you can have such a conflict about tons of things, but pretty much, anybody who has an Ideal In Capital Letters—something that they defend with zeal—you know this mechanism is at work.
The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I’ve squashed dozens in myself, including several relating to truth and rightness and fairness and such. They’re also a major driving force in chronic procrastination, at least in my clients.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives.
They’re not consciously deceiving anyone; they’re sincere in their belief, despite the fact that this sincerity is a deception mechanism.
In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our “Ideal” is the excuse for our failure to actually act. For example, if your ideal is Truth, then you can always ask for more truth to justify something you don’t want to do, but insist that anything you do want to do is supporting the Truth. (Not that any of this takes place consciously, mind you.)
So someone who’s not actively investigating their own ideals for IBRCs is not really serious about being a rationalist. They’re just indulging an ideal of rationalism and signaling their superiority and in-group status, whether they realize it or not. (I sure never realized it all the years that I was doing it.)
I agree that people can take “really good ideas” too far, but I’m not satisfied by the distinction you draw.
The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
Only if you’re speaking in an abstract way that’s divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something “bad” is different from labeling it “not very good”, in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.)
Some people have a hard time grokking this, because intellectually, it’s easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others’ attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you.
However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you’ve got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.
One of the concepts I’ve been playing with is the idea that the advantage of knowing our innate biases is not so much in overcoming them as in identifying and circumventing them.
Your common scenarios regarding risk assessment and perceptions of loss vs. gain generally assume a basis in evolutionary psychology. If these are in fact built into our brains, it strikes me that trying to overcome them directly is a skill we can never fully master, and trying to do so tempts akrasia.
Consider a scenario where you can spend $1000 to have a 50% shot of winning $2500. It’s a win in expectation, but turning over the $1000 is tough because of how we weigh loss (if I recall, loss is weighted about twice as heavily as gain). On the other hand, you can just tell yourself (rationalize?) that when you hand over the $1000 you’re getting back $1250 for sure. It’s an incorrect belief, but one I’d probably use, as I wouldn’t have to expend willpower overcoming my faulty loss-prevention circuits.
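For concreteness, here is a minimal sketch of the arithmetic in that scenario, assuming the roughly 2:1 loss weighting recalled above; the coefficient and variable names are illustrative assumptions, not claims about the literature:

```python
# Sketch of the bet above: pay $1000 for a 50% shot at $2500.
# The loss_aversion coefficient of 2 is the figure the comment recalls;
# it and the variable names are illustrative assumptions, not data.

stake = 1000          # dollars handed over
prize = 2500          # gross payout on a win
p_win = 0.5
loss_aversion = 2.0   # losses weighted ~2x as heavily as gains

# Objective expected value of the bet, net of the stake: +$250
expected_value = p_win * (prize - stake) + (1 - p_win) * (-stake)

# How the same bet "feels" when the loss term is doubled: -$250
felt_value = p_win * (prize - stake) + (1 - p_win) * (-stake) * loss_aversion

# The "I'm getting back $1250 for sure" reframe is just the expected
# gross return, p_win * prize, which hides the loss term entirely.
print(f"expected value: {expected_value:+.0f}")
print(f"felt value:     {felt_value:+.0f}")
```

In other words, the reframe works by quoting the expected gross return and leaving the possible loss out of view.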
People overcome innate but undesired drives all the time, like committing violence out of anger. Your former approach actually doesn’t sound very hard to me, although it might be hard for someone unusually loss-averse. Also, the latter approach sounds like it might not be self-deception in every sense, since there’s no single thing in the mind that is a “belief” (q.v. Instrumental vs. Epistemic – A Bardic Perspective); it seems like this point is being consistently ignored throughout this discussion.
Well, what you want to do (just about by definition) is be rational in the instrumental sense.
I put significant terminal utility in believing true things, and believe that epistemic rationality is very important for instrumental rationality. Furthermore, it is the right decision to choose not to self deceive in general because you can’t even know what you’re missing and there is reason to suspect that it is a lot.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
Maybe you just meant “I’m not interested in that kind of argument because it is so clearly wrong to not be worth my time”, but it seems to come across as “I don’t care even if it’s true”, and that’s probably where the downvote came from.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
This is a confusion based on multiple meanings of “belief”, along the lines of the “does the tree make a sound?” debate. Depending on your definition of belief, the above is either trivial or impossible.
For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it’s really blue afterward, have nothing to do with your “believing” in that moment. “Believe” is a verb—something that you DO, not something that you have.
In common parlance, we think that belief is unified and static—which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it’s still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison.
In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone “believes” they’re not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it.
Notice that these two parts are separate: there is a thought that occurs, and then it is believed… i.e., passively accepted, without dispute.
Thus, there is really no such thing as “belief”—only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it’s the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.
No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren’t “true”, no matter how useful it may be to believe them, either for practical purposes or as a steppingstone to finding something better.
You might well be right that there are loads of “useful falsehoods”, you might even know them personally, but you’re wrong to claim fear of knowingly internalizing such falsehoods is irrational just because you know them to work. Simply taking your word for it would in fact be less rational.
His “provocative operator” (later renamed “green hat thinking”) is deliberately stating an idea that may be quite thoroughly insane, as a way of backing up and coming at it from a different angle.
This sounds like a good creativity hack, but I don’t see what it has to do with accepting false beliefs.
This sounds like a good creativity hack, but I don’t see what it has to do with accepting false beliefs.
It’s an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear “more wrong” than where you started from.
You might well be right that there are loads of “useful falsehoods”, you might even know them personally, but you’re wrong to claim fear of knowingly internalizing such falsehoods is irrational just because you know them to work. Simply taking your word for it would in fact be less rational.
[boggle] Why do you think this has anything to do with me? Placebos are useful falsehoods, and there’s tons of research on them. Go look at Dweck and Seligman on the growth mindset and optimism, respectively.
Hell, go study pickup or hypnosis or even acting, for crying out loud. Direct marketing, even. ANY practical art that involves influencing the beliefs of one’s self or others, that’s tied to reasonably timely feedback.
To the extent that you find the teachings of these arts to be less than “true”, and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true.
However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning.
And the truth is not a substitute for that, however much blind faith you put into it.
The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning. In all other matters, knowing what to do is immensely more important than knowing why… and the why is only useful if it helps you to believe in something enough to make you actually DO something.
It’s an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear “more wrong” than where you started from.
You’ve really hedged your language here. Are we talking about beliefs, or “perspectives”? The two seem very different to me. Does anyone ever acquire a skill without trying new perspectives, unproven variations on existing “known-good” techniques? This is just exploration vs exploitation, which seems quite distinct from belief. I don’t change betting strategies just because I’m in the middle of an experiment.
Why do you think this has anything to do with me?
Because it seems that you’ve had more experience with LW’ers rejecting your useful falsehoods than useful falsehoods in general, and I guessed this as the motive behind your original complaint. I could be mistaken. If I am, I’m curious as to which “terror” you’re referring to. It seems fairly widely accepted here that a certain amount of self-deception is useful in the pickup domain, for example.
However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning.
Really? All self-described teachers of practical arts have the comfort of systematized winning? There are no snake-oil charlatans for whom things just “went well” and are now out to capitalize on it? How can we tell the difference?
To the extent that you find the teachings of these arts to be less than “true”, and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true. [...] The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning.
I think the above exemplifies the mismatch between your philosophy and mine. Yes, it’s incorrect to claim that only true beliefs are useful. But the stuff of true beliefs (reason, empiricism) is the only set of tools we have when trying to figure out what wins and what doesn’t. To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness. My position is that T trumps U because U is inaccessible without T. I don’t see any other way to reliably arrive at U instead of ~U or V. I am reminded of the Library of Babel.
Really? All self-described teachers of practical arts have the comfort of systematized winning?
I said “for all that”, which is not “for all of”. Very different meaning. “For all that” means something like “despite the fact that”, or “although”. I.e., “although the teachers of practical arts are often deluded, they at least have the comfort of systematized winning.” What’s more, it’s you who said “self-described”—I referred only to people who have some systematized winning.
There are no snake-oil charlatans for whom things just “went well” and are now out to capitalize on it? How can we tell the difference?
See, that’s the sort of connotation I find interesting. How is “snake oil charlatan” connected to having things go well and wanting to capitalize on it? Would you want to be taught by someone who didn’t have things go well for them? And if they didn’t want to capitalize on it in some fashion, why would they be teaching it? (Even if the only capitalization taking place is that they enjoy teaching!)
If you break down what you’ve just said, it should be easy to see why I think this sort of “thinking” is just irrationally-motivated reaction—the firing off of “boo” lights in response to certain buttons being pushed.
To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness.
No—I’m saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren’t really testing U, you’re testing U+X, where X is whatever else it is you believe about U, like, “I’m going to see if this works”, or “I think this is stupid”.
Good epistemic hygiene in testing the usefulness of a belief requires that you not contaminate your test chamber with other beliefs.
Now, that may sound like a defense of psychic phenomena. But it isn’t. You don’t need an absence of skepticism from the overall proceedings, only a temporary absence of skepticism in the performer.
And the measurement of the performer’s results can be as objective and skeptical as you like. (Although, for processes whose intent is also subjective—i.e., to make you feel better about life or be more motivated—then only the subjective experiencer can measure that bit, of course.)
Sometimes, I get clients who will say something like, “Well, I felt better, but how do I know I wasn’t just imagining it?”, and I have to separate out the confusion. Because what they’re really saying is, “At time X I felt good, but now at time Y I’m thinking that maybe it wasn’t real”.
However, if the experiment was “perform procedure Z at time X-1” with a prediction that this will result in a positive experience at time X, then procedure Z worked. And retroactively questioning it is only making you feel bad now—it can’t change how you felt at time X, although it can reconsolidate your memory so it seems like you felt worse at time X.
In other words, it’s questioning yourself afterwards that’s poor epistemic hygiene, because it actually alters your memories. (See all those studies about how asking people leading questions alters their memories.)
This “success at time X, questioning at time Y” pattern is really common among naturally-struggling people. It’s basically the way people prevent themselves from advancing. And it doesn’t matter what procedure Z is—it could be something like making a plan for their day. I’ll ask, “well, did you make a plan?” And they’ll be like, “well, yeah, but what if I didn’t do it right?”
It’s this process of self-questioning that directly results in the problems. If you want to develop skill at something, you can’t tinker with your success criteria after the fact, to make it so that you failed anyway.
Skepticism is useful before you do something, to set up the criteria for measuring something. But it’s not useful while you’re doing the thing, nor after you’ve actually done it.
The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, they will suddenly become delusional theists or something. Which is really ironic, because in the type of situation I’m describing it’s the questioning that creates the delusion, redefining the past to suit the whims of the present.
I referred only to people who have some systematized winning.
I did assume you held the position that these people are somehow identifiable. If your point was merely “there exist some people out there who are systematic winners”… then I’m not sure I get your point.
How is “snake oil charlatan” connected to having things go well and wanting to capitalize on it?
Because “I figured out the key to success, I succeeded, and now I want to share my secrets with you” is the story that sells, regardless of actual prior circumstance or method.
Would you want to be taught by someone who didn’t have things go well for them? And if they didn’t want to capitalize on it in some fashion, why would they be teaching it?
I don’t think you understand why I bring up charlatans. This is a signaling problem. You’re right… I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there’s going to be a lot of faking going on.
If you break down what you’ve just said, it should be easy to see why I think this sort of “thinking” is just irrationally-motivated reaction—the firing off of “boo” lights in response to certain buttons being pushed.
My, you are confident in your theories of human motivation. You said (minus subsequent disclaimers, because this is what I was responding to), “teachers of the practical arts [...] have the comfort of systematized winning”. It seems to me that this “comfort” is claimed far out of proportion to its actual incidence, which bears very directly on the whole issue of distinguishing “useful” signal from noise. If you do have legitimate insights, you’re certainly not making yourself any more accessible by pointing to others in the field. If your point was merely “some deluded people win”… then I’m not sure I get your point.
No—I’m saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren’t really testing U, you’re testing U+X, where X is whatever else it is you believe about U, like, “I’m going to see if this works”, or “I think this is stupid”.
This response isn’t really addressing my point of contention, with the result that I mostly agree with the rest of your comment (sans last paragraph). So I’ll try to explain what I mean by “T”. You say “skepticism is useful before you do something”, and it’s precisely this sort of skepticism that T represents. You leapt straight into explaining how I’ve just got to embrace U in order to make it work, but that doesn’t address why I’m even considering U in the first place. Hence “I first need a true belief T that U is useful”. Pardon me for a moment while I look into how useful it is to believe I’m a goat.
The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, they will suddenly become delusional theists or something.
Again, I think you’re overstating this fear, but now that you mention theism, I can’t help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you’ve already decided to experimentally swallow… work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?
You’re right… I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there’s going to be a lot of faking going on.
Well, in the case of at least marketing and pickup, you can generally observe the teacher’s own results, as long as you’re being taught directly. For acting, you could observe the ability of the teacher’s students. Copywriting teachers (people who teach the writing of direct marketing ads) can generally give sales statistics comparisons of their improvements over established “controls”. (Btw, in the direct marketing industry, the “control” is just whatever ad you’re currently using; it’s not a control condition where you don’t advertise or run a placebo ad!)
IOW, the practical arts of persuasion and belief do involve at least some empirical basis. One might quibble about what great or excellent acting or pickup might be, but anybody can tell bad acting or failed pickup. And marketing is measurable in dollars spent and actions taken. Marketers don’t always understand math or how to use it, but they’re motivated to use statistical tools for split-testing.
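As an aside on that last point, the split-testing mentioned here can be as simple as comparing two conversion rates. Below is a minimal, hypothetical sketch with made-up numbers; it is not anything described in the thread, just one standard way such a comparison might be run:

```python
# Hypothetical split test: compare a "control" ad (the current ad, in the
# direct-marketing sense used above) against a challenger. Numbers invented.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal approximation to the two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Control converts 200 of 10,000 visitors; challenger converts 260 of 10,000.
p_a, p_b, z, p = two_proportion_z(200, 10_000, 260, 10_000)
print(f"control {p_a:.2%}, challenger {p_b:.2%}, z = {z:.2f}, p = {p:.3f}")
```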
If your point was merely “some deluded people win”… then I’m not sure I get your point.
The ancient Greeks thought fire was an element, but that didn’t stop them from using fire. Developing a practical model and a “true” theory are quite often independent things. My point is that you don’t need a true theory to build useful models, or to learn and use them. And in most practical arts related to belief or persuasion, you will need to “act as if” certain beliefs are true, whether or not they are, because those beliefs nonetheless represent a model for reproducing behaviors that produce results under some set of circumstances.
For example, Seth Roberts’ theory of calorie-flavor association is probably not entirely true—but acting as if it were true produces results for some people under some circumstances. This represents progress, not failure.
“I first need a true belief T that U is useful”.
Right—and my process for that, with respect to self-help techniques, is mainly to look at the claims for a technique, and sort for ones that can be empirically verified and claim comparable or improved benefits relative to the ones that I’ve already tried. Assuming that the cost in time to learn the technique is reasonable (say, a few hours), and it can be implemented and tested quickly, that’s sufficient T probability for me to engage in a test.
I can’t help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you’ve already decided to experimentally swallow… work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?
Religion doesn’t claim repeatable empirical benefits—in fact, religions pretty carefully disclaim any. Zen is one of the few religions that contain procedures with claimed empirical benefits (e.g. meditation producing improved concentration and peace of mind), and those claims have actually held up pretty well under scientific investigation as well as my personal experimentation.
So, for me at least, your “T” consists mostly of claimed empirical benefits via a repeatable procedure capable of very short evaluation times—preferably suitable for immediate evaluation of whether something worked or it didn’t.
I do have two things that most people evaluating such things don’t. At first, I tried a lot of these same techniques before I understood monoidealism and somatic markers, and couldn’t get them to work. But once I had even the rudiments of those ideas—not as theory but as experience—I got many of the same things to work quite well.
That suggests very strongly to me that the major hidden variable in interpersonal variation of self-help technique applicability has less to do with the techniques themselves or any inherent property of the learner, than with whether or not they’ve learned to distinguish conscious and unconscious thoughts, and their abstract conception of an emotion or event from its physical representation as a body sensation or as an internal image or sound. Most people (IME) seem to naturally confuse their internal narration about their experiences, and the experiences themselves. (Sort of like in “Drawing On The Right Side Of The Brain”, where people confuse their symbols or abstractions for faces and hair with what they’re actually seeing.)
These separation skills are the primary thing I teach (as a vehicle to make other self-help techniques accessible), and many people require some sort of live feedback in order to learn them. There is some mild anecdotal evidence that prior experience with meditation helps—i.e., the students who pick them up faster seem somewhat more likely to report prior meditation experience. But I haven’t even tried to be rigorous about investigating that, since even non-meditators can learn the skill.
(Hm, now that I’ve written this, though, I wonder whether some of the Drawing On The Right Side Of The Brain exercises might be helpful in teaching these skills. I’ll have to look into that.)
My, you are confident in your theories of human motivation.
If you look closely at what I said, I was explaining why I thought what I thought about your response, not saying that my thought was correct; I just wanted to explain why I had the impression that I did, not justify the impression or argue that it was actually true. That’s a subtlety that’s hard to convey in text, I suppose.
believing things that aren’t “true”, no matter how useful it may be to believe them
Why should a belief be true just because it’s useful? Or are you saying people are claiming a belief’s usefulness is not true despite the evidence that it’s useful?
Why should a belief be true just because it’s useful? Or are you saying people are claiming a belief’s usefulness is not true despite the evidence that it’s useful?
Neither. I’m saying that a popular attitude of LW culture is to prefer not to “believe” the thing it’s useful to believe, if there is any evidence the belief is not actually true, or often even if there is simply no peer-reviewed evidence explicitly associated with said belief.
For example, self-fulfilling prophecies and placebo effects. Some people here react with horror to the idea of believing anything they can’t statistically validate… some even if the belief has a high probability of making itself come true in the future.
Neither. I’m saying that a popular attitude of LW culture is to prefer not to “believe” the thing it’s useful to believe, if there is any evidence the belief is not actually true, or often even if there is simply no peer-reviewed evidence explicitly associated with said belief.
My immediate reaction to this paragraph is skepticism that I can believe something, if I don’t believe the evidence weighs in its favor; other people might be able to choose what they believe, but I’ve internalized proper epistemology well enough that it’s beyond me. On reflection, though, while I think there is some truth to this, it’s also a cached oversimplification that derives its strength from being part of my identity as a rationalist.
Well, while a self-fulfilling belief might help you accomplish one goal better, it may make you worse at accomplishing another (assuming that belief is not true). It may be the case that some false self-fulfilling beliefs will make you better off throughout your life, but that’s hard to prove.
In contrast, you can be “less wrong” just by doing nothing, or by being a critic of those who do something. But in the real world (and even in science), you can never win BIG—and it’s often hard to win at all—if you never place any bets.
To expand a little on what timtyler said, I think you’re mixing up beliefs and actions.
Doing nothing doesn’t make your beliefs less wrong, and placing bets doesn’t make your beliefs more right (or wrong).
Wanting to be ‘less wrong’ doesn’t mean you should be conservative in your actions.
That it’s not aimed at being “more right”—which is not at all the same as being less wrong.
I’ve also had mixed feelings about the concept of being “less wrong.” Anyone else?
Of course, it is easier to identify and articulate what is wrong than what is right: we know many ways of thinking that lead away from truth, but it is harder to know which ways of thinking lead toward the truth. So the phrase “less wrong” might merely be an acknowledgment of fallibilism. All our ideas are riddled with mistakes, but it’s possible to make fewer mistakes, or less egregious ones.
Yet “less wrong” and “overcoming bias” sound kind of like “playing to not lose,” rather than “playing to win.” There is much more material on these projects about how to avoid cognitive and epistemological errors, rather than about how to achieve cognitive and epistemological successes. Eliezer’s excellent post on underconfidence might help us protect an epistemological success once we somehow find one, and protect it even from our own great knowledge of biases, yet the debiasing program of LessWrong and Overcoming Bias is not optimal for showing us how to achieve such successes in the first place.
The idea might be that if we run as fast as we can away from falsehood, and look over our shoulder often enough, we will eventually run into the truth. Yet without any basis for moving towards the truth, we will probably just run into even more falsehood, because there are exponentially more possible crazy thoughts than sane thoughts. Process of elimination is really only good for solving certain types of problems, where the right answer is among our options and the number of false options to eliminate is finite and manageable.
If we are in search of a Holy Grail, we need a better plan than being able to identify all the things that are not the Holy Grail. Knowing that an African swallow is not a Holy Grail will certainly keep us from mistaking a bird for the true Holy Grail, but it tells us absolutely nothing about where to actually look for it.
The ultimate way to be “less wrong” is radical skepticism. As a fallibilist, I am fully aware that we may never know when or if we are finding the truth, but I do think we can use heuristics to move towards it, rather than merely trying to move away from falsehood and hoping we bump into the truth backwards. That’s why I’ve been writing about heuristics here and here, and why I am glad to see Alicorn writing about heuristics to achieve procedural knowledge.
For certain real-world projects that shall-not-be-named to succeed, we will need to have some great cognitive and epistemological successes, not merely avoid failures.
If we run as fast as we can away from falsehood, and look over our shoulder often enough, we will eventually run into the truth.
And if you play the lottery long enough, you’ll eventually win. When your goal is to find something, approach usually works better than avoidance. This is especially true for learning—I remember reading a book where a seminar presenter described an experiment he did in his seminars, of sending a volunteer out of the room while the group picked an object in the room.
After the volunteer returned, their job was to find the object; a second volunteer would ring a bell either when they got closer or when they got farther away. Most of the time, a volunteer receiving only negative feedback gives up in disgust after several minutes of frustration, while the people receiving positive feedback usually identify the right object in a fraction of the time.
In effect, learning what something is NOT only negligibly decreases the search space, despite it still being “less wrong”.
(Btw, I suspect you were downvoted because it’s hard to tell exactly what position you’re putting forth—some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way. I’m also not clear how you get from the other points to “the ultimate way to be less wrong is radical skepticism”, unless you mean lesswrong.com-style less wrongness, rather than more-rightness. So, the overall effect is more than a little confusing to me, though I personally didn’t downvote you for it.)
Thanks, pjeby, I can see how it might be confusing what I am advocating. I’ve edited the sentence you quote to show that it is a view I am arguing against, and which seems implicit in an approach focused on debiasing.
In effect, learning what something is NOT only negligibly decreases the search space, despite it still being “less wrong”.
Yes, this is exactly the point I was making.
Btw, I suspect you were downvoted because it’s hard to tell exactly what position you’re putting forth—some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way.
Rather than trying to explain my previous post, I think I’ll try to summarize my view from scratch.
The project of “less wrong” seems to be more about how to avoid cognitive and epistemological errors than about how to achieve cognitive and epistemological successes.
Now, in a sense, both an error and a success are “wrong,” because even what seems like a success is unlikely to be completely true. Take, for instance, the success of Newton’s physics, even though it was later corrected by Einstein’s physics.
Yet even though Einstein’s physics is “less wrong” than Newton’s, I think this is a trivial sense which might mislead us. Cognitively focusing on being “less wrong” without sufficiently developed criteria for how we should formulate or recognize reasonable beliefs will lead to underconfidence, stifled creativity, missed opportunities, and eventually radical skepticism as a reductio ad absurdum. Darwin figured out his theory of evolution by studying nature, not (merely) by studying the biases of creationists or other biologists.
Being “less wrong” is a trivially correct description of what occurs in rationality, but I argue that focusing on being “less wrong” is not a complete way to actually practice rationality from the inside, at least, not a rationality that hopes to discover any novel or important things.
Of course, nobody in Overcoming Bias or LessWrong actually thinks that debiasing is sufficient for rationality. Nevertheless, for some reason or another, there is an imbalance of material focusing on avoiding failure modes, and less on seeking success modes.
Perhaps there are intuitive notions of “less wrong” that are different from “more right”, but in a technical sense, they seem to be the same:
At this point it may occur to some readers that there’s an obvious way to achieve perfect calibration—just flip a coin for every yes-or-no question, and assign your answer a confidence of 50%. You say 50% and you’re right half the time. Isn’t that perfect calibration? Yes. But calibration is only one component of our Bayesian score; the other component is discrimination.
Suppose I ask you ten yes-or-no questions. You know absolutely nothing about the subject, so on each question you divide your probability mass fifty-fifty between “Yes” and “No”. Congratulations, you’re perfectly calibrated—answers for which you said “50% probability” were true exactly half the time. This is true regardless of the sequence of correct answers or how many answers were Yes. In ten experiments you said “50%” on twenty occasions—you said “50%” to Yes-1, No-1; Yes-2, No-2; …. On ten of those occasions the answer was correct, the occasions: Yes-1; No-2; No-3; …. And on ten of those occasions the answer was incorrect: No-1; Yes-2; Yes-3; …
Now I give my own answers, putting more effort into it, trying to discriminate whether Yes or No is the correct answer. I assign 90% confidence to each of my favored answers, and my favored answer is wrong twice. I’m more poorly calibrated than you. I said “90%” on ten occasions and I was wrong two times. The next time someone listens to me, they may mentally translate “90%” into 80%, knowing that when I’m 90% sure I’m right about 80% of the time. But the probability you assigned to the final outcome is 1⁄2 to the tenth power, 0.001 or 1/1024. The probability I assigned to the final outcome is 90% to the eighth power times 10% to the second power, (0.9^8)*(0.1^2), which works out to 0.004 or 0.4%. Your calibration is perfect and mine isn’t, but my better discrimination between right and wrong answers more than makes up for it. My final score is higher—I assigned a greater joint probability to the final outcome of the entire experiment. If I’d been less overconfident and better calibrated, the probability I assigned to the final outcome would have been 0.8^8 * 0.2^2, 0.006.
Accounting for the uncertainty in your own mind only gets you so far, to a certain minimum of wrongness. To do better, to be less wrong, you have to actually be right about the rest of the universe outside your mind.
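For anyone who wants to check the arithmetic in the quoted example, here is a minimal sketch of the three joint probabilities being compared (my own illustration, not part of the quoted post):

```python
# Joint probability assigned to all ten true outcomes under the three
# answering strategies described in the quote above.

num_questions = 10

# Coin-flipper: 50% on every answer, so 50% lands on the true outcome each time.
coin_flipper = 0.5 ** num_questions          # 1/1024, roughly 0.001

# Overconfident discriminator: 90% on each favored answer, wrong on 2 of 10,
# so the true outcome got 90% eight times and 10% twice.
overconfident = (0.9 ** 8) * (0.1 ** 2)      # roughly 0.0043

# Same discrimination, better calibration: 80% confidence, still wrong twice.
well_calibrated = (0.8 ** 8) * (0.2 ** 2)    # roughly 0.0067

print(coin_flipper, overconfident, well_calibrated)
```

The coin-flipper’s perfect calibration still loses to both discriminating answerers by a factor of four or more, which is the point of the quoted passage.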
Perhaps there are intuitive notions of “less wrong” that are different from “more right”, but in a technical sense, they seem to be the same:
True but irrelevant; this is psychology, not probability theory. Intuitively, to a first approximation, beliefs are either affirmed or not, and there’s a difference between affirming fewer false beliefs and more true ones.
The fact that psychology can explain how the phrase “less wrong” can be misunderstood does not mean that the misunderstanding is the correct way to interpret that phrase when used by an online community that uses psychology, as well as probability theory, to inform the development of rationality. It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.
It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.
That’s what I’ve been saying, actually. Except that the naivety in question is the belief that brains do probability or utility, when it’s well established that humans can have both utility and disutility, that they’re not the same thing, and that human behavior about them is different. You know, all that loss/win framing stuff?
It’s not rational to expect human beings to treat “less wrong” as meaning the same thing (in behavioral terms) as “more right”. Avoiding wrongness has different emotional affect and different prioritization of behavior and thought than approaching rightness. Think “avoiding a predator” versus “hunting for food”.
The idea that we can simultaneously have approach and avoidance behaviors and they’re differently-motivating is backed by a (yes, peer-reviewed) concept called affective asynchrony. Strong negative or strong positive emotions can switch off the other system, but for the most part, they operate independently. And mistake-avoidance motivation reduces creativity, independence, risk-taking, etc.
Heck, I’d be willing to bet some actual cash money that a controlled experiment would show significant behavioral differences between people primed with the terms “less wrong” and “more right”, no matter how “rational” they rate themselves to be.
Perhaps there are intuitive notions of “less wrong” that are different from “more right”
You bet: there’s the one where you can be “less wrong” by never believing anything, because there are more possible false beliefs than true ones. You have now achieved perfect less-wrongness, at the cost of never having any more-rightness.
You missed the point. The intuitive meaning of “less wrong” you describe is a caricature of the ideal of this community.
If by “never believing anything”, you mean “don’t assign any probability to any event”, well then we give a person who does that a score of negative infinity, as wrong as it gets.
If you mean they evenly distribute the probability mass amongst all possibilities, that is what we consider maximum entropy, a standard so low that anything worse might be considered “reversed intelligence”. As Eliezer explained in the source I quoted earlier, we want to do better than freely confessed ignorance. We really do want actual intelligence, not just the absence of its opposite.
The intuitive meaning of “less wrong” you describe is a caricature of the ideal of this community.
It’s not a caricature of the actual behavior of many of its members… which notably does not live up to that ideal.
If by “never believing anything”, you mean “don’t assign any probability to any event”, well then we give a person who does that a score of negative infinity, as wrong as it gets.
No, I mean choosing to never consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true… especially with respect to the things we would prefer to believe are true about ourselves and others.
A segment of LW culture applauds the detection and management of superficial biases while being ludicrously blind to the massive bias of the very framework it operates in: the one where truth and reason must prevail at all costs, and where the idea of believing something false—even for a moment, even in a higher cause, is unthinkable.
Is that a caricature of the Bayesian ideal? No kidding. But I’m not the one who’s drawing it.
As Eliezer explained in the source I quoted earlier, we want to do better than freely confessed ignorance. We really do want actual intelligence, not just the absence of its opposite.
What I’m specifically referring to here is the brigade whose favorite argument is that something or other isn’t yet proven “true”, and that they should therefore not try it… especially if they spend more time writing about why they shouldn’t try something, than it would take them to try it.
Heck, not just why they shouldn’t try something, but why no one should ever try anything that isn’t proven. Why, thinking a new thought might be dangerous!
And yes, someone actually argued that, in the context of a thread talking about purely-mental experiments that basically amounted to thinking. (Sure, they left themselves weasel room to argue that they weren’t saying thoughts were dangerous, and yet they still used it as a fully general argument, applied to the specific case of experimenting with a thought process.)
What’s that saying about how, if given a choice between changing their mind and trying to prove they don’t need to, most people get busy on the proof?
So, “never believing anything” means having unwavering certainty?
What I’m specifically referring to here is the brigade whose favorite argument is that something or other isn’t yet proven “true”, and that they should therefore not try it… especially if they spend more time writing about why they shouldn’t try something, than it would take them to try it.
Without knowing what “brigade” or techniques you are referring to, I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful. Or maybe some of them had tried a bunch of techniques presented with similar claims, with no success, and decided they want to have a good reason for trying out a particular technique instead of the many similar techniques one might propose. They might even think that, if they knew the reasons that someone was proposing them, they might understand the technique better and use it more effectively, or even see that the reasoning was not quite right, but if they fix it, it suggests that a similar technique might work. They might not actually have the particular problem the technique is supposed to solve, and are seeking evidence about whether it works for people who do have the problem.
I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful.
Good point, but a priori I wouldn’t expect a self-help technique to be harmful in a way that’s either hard to notice or hard to reverse. Can you think of some that are, especially ones where it would be hard to predict the harm beforehand from evidence specific to the technique?
Or maybe some of them had tried a bunch of techniques presented with similar claims, with no success, and decided they want to have a good reason for trying out a particular technique instead of the many similar techniques one might propose.
Not wanting to invest time and effort can be a good reason (then again, if you have time to argue at length in comment threads...), but the existence of similar techniques shouldn’t matter. A greater number of options has been shown to lead to less willingness to choose anything (e.g.); beware. (FWIW, I suspect this has to do with a general heuristic to do the most defensible thing instead of the best thing.)
They might even think that, if they knew the reasons that someone was proposing them, they might understand the technique better and use it more effectively, or even see that the reasoning was not quite right, but if they fix it, it suggests that a similar technique might work.
Strongly agreed. Generally, though, I agree with pjeby’s conclusion (tentatively, but only because so many others here disagree).
Good point, but a priori I wouldn’t expect a self-help technique to be harmful in a way that’s either hard to notice or hard to reverse. Can you think of some that are, especially ones where it would be hard to predict the harm beforehand from evidence specific to the technique?
So, you want an example of a technique that I can argue is harmful, but whose harm is difficult to predict? You want a known unknown unknown? I don’t think I can provide that. But if you look at my assessment of the keeping-cookies-available trick, I explain how there is some possibility of harm and what kinds of evidence one might use to evaluate whether the risk is worth the potential benefit.
Not wanting to invest time and effort can be a good reason (then again, if you have time to argue at length in comment threads...), but the existence of similar techniques shouldn’t matter.
Suppose you have 10 tricks that you might try to solve a particular problem, and that it might take a day to try one trick and evaluate whether it worked for you. Would it be a good idea to spend some time figuring out whether one of the tricks stands out from the others as more likely to work, either generally or for you in particular? Being able to systematically try the trick that is well supported and understood has some value. Or if, in discussing one aspect of the trick that you think would never work, it turns out you were right that what you understood would not work, and the actual trick is something different, then you have not just saved a lot of time; you have prevented yourself from losing the opportunity to try the real trick.
So, you want an example of a technique that I can argue is harmful, but whose harm is difficult to predict? You want a known unknown unknown? I don’t think I can provide that. But if you look at my assessment of the keeping-cookies-available trick, I explain how there is some possibility of harm and what kinds of evidence one might use to evaluate whether the risk is worth the potential benefit.
No, an example of a technique that is harmful, but whose harm would have been difficult for a reasonable person to predict in advance. The potential downside of the cookie trick is easy to notice and easy to reverse (well, I guess you can’t easily reverse gaining epsilon weight, but you can limit it to epsilon), so as a reason not to try it, it’s very weak.
Would it be a good idea to spend some time figuring out whether one of the tricks stands out from the others as more likely to work, either generally or for you in particular? Being able to systematically try the trick that is well supported and understood has some value. Or if, in discussing one aspect of the trick that you think would never work, it turns out you were right that what you understood would not work, and the actual trick is something different, then you have not just saved a lot of time; you have prevented yourself from losing the opportunity to try the real trick.
I take my point back. If you can only try one thing, it makes sense to just act if there is only one option, but to demand a good reason before wasting your chance if there are multiple options. (Formally, this is because the opportunity cost of failure is greater in the latter case.) Realistically, “willpower to engage in psychological modification” seems like it would often be a limiting factor producing this effect; still, I would expect irrational choice avoidance to be a factor in many cases of people demanding a reason to favor one option.
Without knowing what “brigade” or techniques you are referring to, I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful.
My point is that this argument is fully general. What is the evidence that empirical rationality is more likely to be helpful than harmful? If we look at the evidence presented here, where more evidence is presented that being too rational can hurt your happiness and effectiveness, why is this not treated as a reason to wait for more study of this whole “rationality” business to make sure it’s more helpful than harmful?
It’s really ironic that optimism is as much a mind-killer here as politics and religion. Hell, the fact that religion can be shown to have empirically positive effects on people’s lives is often viewed here as a depressing problem, rather than an opportunity to learn something about how brains work. The problem of understanding the god-shaped hole is something people talk a lot about, but very few people are actually doing anything about it.
What is the evidence that empirical rationality is more likely to be helpful than harmful?
Well, we have reliable agriculture, reasonably effective transportation infrastructure, flight, telecommunication, powerful computers with lots of useful software because lots of people worked hard to believe things that are true, and used those true beliefs to figure out how to accomplish their goals.
Asking for evidence is not a fully general counterargument. It distinguishes arguments that have evidence in their favor from those which don’t.
And keep in mind, the social process of science, which you seem to think holds you to too high of a standard, is not merely interested in your ideas being right, but that you have effectively communicated them to the extent that other people can verify that they are right. You might discover the greatest anti-akrasia trick ever, but if you can’t explain it so that other people can use it and have it work even if you are not there to guide the process, then you have only helped yourself and your clients, and talking about it here is not helping. Of course, you could take the opportunity to figure out how to explain it better, though it would require you to “consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true”.
And keep in mind, the social process of science, which you seem to think holds you to too high of a standard, is not merely interested in your ideas being right, but that you have effectively communicated them to the extent that other people can verify that they are right.
Two things I forgot in my other reply: first, testing on yourself is a higher standard than peer review, if your purpose is to find something that works for you.
Second, if this actually were about “my” ideas (and it isn’t), I’ve certainly effectively communicated many of them to the extent of verifiability, since many people have reported here and elsewhere about their experiments with them.
But very few of “my” ideas are new in any event—I have a few new approaches to presentation or learning, sure, maybe some new connections between fields (ev psych + priming + somatic markers + memory-prediction framework + memory reconsolidation, etc.), and a relatively-new emphasis on real-time, personal empirical testing. (I say relatively new because Bandler was advocating extreme testing of this sort 20+ years ago, but for some reason it never caught on in the field at large.)
And I’m not aware that any of these ideas is particularly controversial in the scientific community. Nobody’s pushing for more individual empirical testing per se, but the “brief therapy” movement that resulted in things like CBT is certainly more focused in that direction than before.
(The reason I stopped even bothering to write about any of that, though, is simply that I ended up in some sort of weird loop where people insist on references, and then ignore the ones I supply, even when they’re online papers or Wikipedia. Is it any wonder that I would then conclude they didn’t really want the references?)
Well, we have reliable agriculture, reasonably effective transportation infrastructure, flight, telecommunication, powerful computers with lots of useful software because lots of people worked hard to believe things that are true, and used those true beliefs to figure out how to accomplish their goals.
Those are the products of rationalism. I’m asking about evidence that the practice of (extreme) rationalism produces positive effects in the lives of the people who practice it, not the benefits that other people get from having a minority of humans practice it.
Asking for evidence is not a fully general counterargument. It distinguishes arguments that have evidence in their favor from those which don’t.
It is if you also apply the status quo bias to choose which evidence to count.
You might discover the greatest anti-akrasia trick ever, but if you can’t explain it so that other people can use it and have it work even if you are not there to guide the process, then you have only helped yourself and your clients, and talking about it here is not helping
I really wish people wouldn’t conflate the discussion of learning and attitude in general with the issue of specific techniques. There is plenty of evidence for how attitudes (of both student and teacher) affect learning, yet somehow the subject remains quite controversial here.
(Edited to say “extreme rationalism”, as suggested by Nick Tarleton.)
I’m asking about evidence that the practice of rationalism produces positive effects in the lives of the people who practice it, not the benefits that other people get from having a minority of humans practice it.
My point is that this argument is fully general. What is the evidence that empirical rationality is more likely to be helpful than harmful? If we look at the evidence presented here, where more evidence is presented that being too rational can hurt your happiness and effectiveness, why is this not treated as a reason to wait for more study of this whole “rationality” business to make sure it’s more helpful than harmful?
Evidence is demanded for communicating the change in preferred decision.
If I like eating cookies, and so choose to eat cookies, it takes at least a deliberative thought to change my mind. I may have all the data, but changing a decision requires considering it. I may realize that I’m getting overweight, and that most of my calories come from cookies, so I change my mind and start preferring the decision of not eating cookies.
If a guy on the forum says, to my disbelief, that stopping eating the cookies in my particular situation will actually make me even more overweight, I won’t be able to change my mind as a result of hearing his assertion. I consider what it’d take to change my mind, and present him with a constructive request: find a few good studies supporting your claims, and show them to me. That’s what it takes to change my mind, and I can think of no other obvious way for him to convince me to change this decision.
Evidence is demanded for communicating the change in preferred decision.
You mean status quo bias, like the argument against the Many-Worlds interpretation?
If a guy on the forum says, to my disbelief, that stopping eating the cookies in my particular situation will actually make me even more overweight, I won’t be able to change my mind as a result of hearing his assertion.
It’s funny that you mention this, because I actually know of an author who says something just similar enough to that idea that you could maybe misread what she says as meaning you should eat the cookies.
Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they’re hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now—in effect, they can literally procrastinate on overeating, because they could now do it “any time”. There’s no particular moment at which they need to eat up because they’re about to be out of reach of food.
I bring this up because, if you heard this theory, and then misinterpreted it as meaning you should eat the cookies, then it would be quite logical for you to be quite skeptical, since it doesn’t match your experience.
However, if you simply observed your past experience of overeating and found a correlation between times when you ate cookies and a pending separation from food (e.g. when being about to go into a long meeting), I would be very disappointed for your rationality if you then chose NOT to try bringing the cookies into the meeting with you, or hiding a stash in the bathroom that you could excuse yourself for a moment to get, or even just focusing on having some right there when you get out of the meeting.
And yes, this metaphor is saying that if you think you need studies to validate things that you can observe first in your own past experience, and then test in your present, then you’ve definitely misunderstood something I’ve said.
(Btw, in case anyone asks, the author is Dr. Martha Beck and the book I’m referring to above is called The Four-Day Win.)
Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they’re hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now—in effect, they can literally procrastinate on overeating, because they could now do it “any time”. There’s no particular moment at which they need to eat up because they’re about to be out of reach of food.
I strongly suspect that this trick wouldn’t work on me—the problem is that I’ve taught my brain to deliberately keep a step ahead of this sort of self-deception. Even if I started out by eating a whole pack of cookies, the second pack, that I was just supposed to keep available and feel the availability of, but not eat, would not feel available. If it was truly genuinely available and it was okay to eat it, I would probably eat it. If not, I couldn’t convince myself it was available.
What I may try is telling myself a true statement when I’m tempted to eat, namely that I actually do have strong food security, and I may try what I interpret as your monoidealism trick, to fill my imagination with thoughts of eating later, to convince myself of this. That might help—if the basic underlying theory of eating to avoid famine is correct. Some of the Seth Roberts paradigm suggests that other parts of our metabolism have programmed us to eat more when food is easily available. We could expect evolution to be less irrational than the taxi driver who quits early on rainy days when there are lots of fares, and works harder and longer when work is harder to come by, in order to make the same minimum every day.
Another thought is that it may be a bad situation for your diet to ever allow yourself to be in food competition with someone else—to ever have two people, at least one of whom is trying to diet, eating from the same bag of snacks in a case where the bag is not immediately refilled on being consumed.
’Tis a pity that such theories will never be tested unless the diet-book industry and its victims/prey/readers become something other than what they are now; even if I were to post saying this trick works, it would only be one more anecdote among millions on the Internet.
That might help—if the basic underlying theory of eating to avoid famine is correct.
IIRC, she only advocated this theory for people who were binging in response to anticipated hunger, and not as a general theory of weight loss. It’s only a tiny part of the book as a whole, which also discussed other emotional drivers for eating. Part of her process includes making a log of what you eat, at what time of day, along with what thoughts you were thinking and what emotional and physical responses you were having… along with a reason why the relevant thought might not be true.
I haven’t tried it myself—I actually didn’t buy the book for weight loss, but because I was intrigued by her hypothesis that it only takes four days to implement a habit (not 21 or 30 as traditional self-help authors claim), provided that the habit doesn’t represent any sort of threat to your existing order. For example, most people can easily learn a new route to work or school within four days of moving or changing jobs or schools.
That is, it’s only habits that conflict in some way with an existing way of doing things that are difficult to form, so her proposal is to use extremely small increments, like her own example of driving to the gym every morning for four days… but just sitting in the parking lot and not actually going in… then going in and sitting on a bike but not exercising… etc. At each stage, four days of it is supposed to be enough to make what you’ve already been doing a non-threatening part of your routine.
I’ve used the approach to implement some small habits, but nothing major as yet. Seems promising so far.
It seems that keeping cookies constantly available so that one never feels they will be unavailable does not involve any sort of self-deception. One can honestly tell oneself that one doesn’t have to eat the cookie now; it will still be there later.
But still, this trick might be harmful to some people. If someone instead will just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies.
It might help to have some studies that say some percentage of people are the sort this technique helps, and some other percentage are the sort that are harmed, or better yet, identify some observable characteristics that predict how a person would be affected. With this information, people can figure out if it makes sense for them to risk making their problem worse for a time by having more cookies available, in order to maybe learn a technique that solves their problem. They might also be able to figure out if they should try some other trick they heard about first.
But still, this trick might be harmful to some people. If someone instead will just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies.
And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don’t want to do something, you can always find a reason.
Sure, that doesn’t mean that ANY reason is valid, or that EVERY objection is invalid. However, that too is another step in a chain of reasoning that never ends, and never results in taking action. Instead, you will simply wait and wait and wait for someone else to validate your taking action. So if you don’t take status quo bias into consideration, then you have no protection against your own confabulation, because the only way to break the confabulation deadlock is to actually DO something.
Even if you don’t know what the hell you’re doing and try things randomly, you’ll improve as long as there’s some empirical way to measure your results, and the costs of your inevitable mistakes and failures are acceptable. Hell, look at how far evolution got, doing basically that. An intelligent human being can certainly do better… but ONLY by doing something besides thinking.
After all, the platform human thinking runs on is not reliable. In principle, LWers and I agree on this. But in practice, LWers argue as if their brains were reliable reasoning machines, instead of arguing machines!
I learned the hard way that my brain’s confabulation—“reasoning”—is not reliable when it is not subjected to empirical testing. It works well for reducing evidence to pattern and playing with explanatory models, but it’s lousy at coming up with ideas in the first place, and even worse at noticing whether those ideas are any good for anything but sounding convincing.
One of my pet sayings is that “amateurs guess, professionals test”. But “test” in the instrumental world does not mean a review of statistics from double-blind experiments. It means actually testing the thing you are trying to build or fix, in as close to the actual use environment as practical. If my mechanic said statistics show it’s 27% likely that the problem with my car is in the spark plugs, but didn’t actually test them, I’d best get another mechanic!
The best that statistics can do for the mechanic is to mildly optimize what tests should be done first… but you could get almost as much optimization by testing in easiest-first order.
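To make that last claim concrete, here is a toy sketch with entirely made-up fault probabilities and test times (mine, not the commenter’s); with numbers like these, testing easiest-first comes out about as well as testing in order of how likely each fault is:

```python
# Toy comparison of diagnostic-test orderings. The probabilities and test
# times below are hypothetical, chosen only to illustrate the point.

faults = {
    # fault: (probability it is the culprit, minutes to test for it)
    "battery":     (0.55, 25),
    "spark plugs": (0.15, 10),
    "fuel pump":   (0.20, 30),
    "wiring":      (0.10, 60),
}

def expected_minutes(order):
    """Expected testing time until the actual fault is found, assuming
    exactly one fault and tests run in the given order."""
    total = elapsed = 0.0
    for name in order:
        p, minutes = faults[name]
        elapsed += minutes
        total += p * elapsed
    return total

most_likely_first = sorted(faults, key=lambda f: -faults[f][0])
easiest_first = sorted(faults, key=lambda f: faults[f][1])

print("most likely first:", expected_minutes(most_likely_first))  # 47.0 minutes
print("easiest first:    ", expected_minutes(easiest_first))      # 46.25 minutes
```

With different numbers the gap can be larger, but the sketch illustrates the claim that the cheap ordering heuristic often captures most of what the statistics would buy you.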
It might help to have some studies that say some percentage of people are the sort this technique helps, and some other percentage are the sort that are harmed, or better yet, identify some observable characteristics that predict how a person would be affected.
And where, pray tell is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn’t sound very scalable to me.
And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don’t want to do something, you can always find a reason.
The fact that one might get hit by a car when crossing the street does not prevent people from crossing the street; it causes them to actually look to see if cars are coming before crossing the street. Did you miss the part where I talked about how the evidence might convince someone the risk is acceptable in their case? Or how it might help them to compare it to another trick that is less risky or more likely to work for them that they could try first?
And where, pray tell is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn’t sound very scalable to me.
Getting volunteer subjects for a study is different than announcing a trick on the internet and expecting people to try it.
Where do you think the data is going to come from if people just try it on their own? How are you going to realize if you have suggested a trick that doesn’t work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure because people just make excuses?
How are you going to realize if you have suggested a trick that doesn’t work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure because people just make excuses?
I can only assume you’re implying that that’s what I do. But as I’ve already stated, when someone has performed a technique to my satisfaction, and it still doesn’t work, I have them try something else. I don’t just say, “oh well, tough luck, and it’s your fault”.
There are only a few possibilities regarding an explanation of why “different things work for different people”:
1. Some things only work on some people, and this is an unchanging trait attributable to the people themselves;
2. Some things only work on certain kinds of problems, and many problems superficially sound similar but are actually different in their mechanism of operation (so that technique A works on problem A1 but not A2, and the testers/experimenters have not yet discerned the difference between A1 and A2); and
3. Some people have an easier time of learning how to do some things than others, depending in part on how the thing is explained, and what prior beliefs, understandings, etc. they have. (So that even though a test of technique A is being performed, in practice one is testing an unknown set of variant techniques A1, A2, ...)
On LW, #1 is a popular explanation, but I have seen much more evidence that makes sense for #2 and #3. (For example, not being able to apply a technique and then later learning it supports #3, and discovering a criterion that predicts which of two techniques will be more likely to work for a given problem supports #2.)
Of course, I cannot 100% rule out the possibility that #1 could be true, but it seems like pretty long odds to me. There are so many clear-cut cases of #2 and #3 that, barring actual brain damage or defect, #1 seems like adding unnecessary entities to one’s model, without any theoretical or empirical justification whatsoever.
More than that, it sounds exactly like attribution error, and an instance of Dweck’s “fixed” mindset as well. In other words, we can expect belief in #1 to be associated with a mindset that is highly correlated with consistent difficulty and stress in the corresponding field.
That’s why I consider view #1 to be bad instrumental hygiene as well as not that likely to be true anyway. It’s a horrible negative self-prime to saddle yourself with.
Actually, it might make more sense to try to figure out why it works some times and not others, which can even happen in the same person… like, uh, me. If I’m careful about what I keep in the house, I wind up gorging on anything tasty almost as soon as I get it, and buying less healthy ‘treats’ more often (‘just this once’, repeatedly) when I go out. If I keep goodies at home, I’ll ignore them for a while, but then decide something along the lines of “it’d be a shame to let this go to waste” and eat them anyway.
There are different mental states involved in each of those situations, but I don’t know what triggers the switch from one to another.
You mean status quo bias, like the argument against the Many-Worlds interpretation?
I mean the argument being too weak to change one’s mind about a decision. It communicates the info, changes the level of certainty (a bit), but it doesn’t flip the switch.
That it’s not aimed at being “more right”—which is not at all the same as being less wrong.
To be more right often requires you to first be more wrong. Whether you try something new or try to formulate a model or hypothesis, you must at minimum be prepared for the result to be more wrong at first.
In contrast, you can be “less wrong” just by doing nothing, or by being a critic of those who do something. But in the real world (and even in science), you can never win BIG—and it’s often hard to win at all—if you never place any bets.
This is perhaps a useful distinction:
When it comes to knowledge of the world you want to be more right.
But when it comes to reasoning I do think it is more about being less wrong… there are so many traps you can fall into, and learning how to avoid them is so much of being able to reason effectively.
Well said.
The group title is attempting to be modest—which is cool.
Disagree. You don’t have to believe your new model or hypothesis.
Indeed. It seems that PJEby is using a definition of “wrong” according to which I am wrong if I act in a way that is implied by certain belief in some false proposition, and that is not implied by certain disbelief in that proposition. He’s right that we should be prepared to sometimes be wrong in that sense. But I’m not convinced anyone else is interpreting “less wrong” in that way.
No, I mean that a major part of LW culture appears to be an irrational terror of believing things that aren’t “true”, no matter how useful it may be to believe them, either for practical purposes or as a steppingstone to finding something better. (Ala deBono’s notion of “proto-truth”—i.e., a truth you accept as provisional, rather than absolute.)
(DeBono’s notion of lateral thinking, by the way, is another great example of how, to find something more right, you may start by doing something that’s knowingly more wrong. His “provocative operator” (later renamed “green hat thinking”) is deliberately stating an idea that may be quite thoroughly insane, as a way of backing up and coming at it from a different angle.)
Irrational?
If you refuse to accept false beliefs that present themselves as useful, and refuse to tolerate any knowing self-deception in yourself, and you pursue this path as far as you can push it, and you have the intelligence and background knowledge to push it far enough, then uncompromising truth-seeking pays a dividend.
If you decide that some false beliefs are useful, you don’t get to take even the first steps, and you never find out what would have happened if you had pursued truth without compromise.
Perhaps you find that a false belief on this subject is more convenient, though...?
(I need to write up a canonical article on “No, we are not interested in convenient self-deceptions that promise short-term round-one instrumental benefits, we are interested in discovering the dividend of pushing epistemic truthseeking as far as we can take it”, since it’s a cached thought deep wisdom Dark Side Epistemology thingy that a lot of newcomers seem to regurgitate.)
For a while I tutored middle school students in algebra. Very frequently, I heard things like this from my students:
“I’m terrible at math.”
“I hate math class.”
“I’m just dumb.”
That attitude had to go. All of my students successfully learned algebra; not one of them learned algebra before she came to believe herself good at math. One strategy I used to convince them otherwise was giving out easy homework assignments—very small inferential gaps, no “trick questions”.
Now, the “I’m terrible at math” attitude was, in some sense, correct. You could look at their grades and their standardized test scores and see that they were in the lowest quartile of their class. But when my students started seeing A’s on their homework papers—when they started to believe that maybe they were good at math, after all—the difference in their confidence and effort was night and day. It was the false belief that enabled them to “take the first steps.”
I think this phenomenon illustrates a very widespread misunderstanding of what math is and how one becomes good at it. Consider the following two anecdotes:
1) Sammy walks into advanced Greek class on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at Greek” and “just dumb”.
2) Sammy walks into advanced algebra on the first day of school, eager and ready to learn. He is crushed when, about 15 minutes after the class begins, he realizes that he has no idea what the teacher is talking about. Despairing, he concludes that he is “terrible at math” and “just dumb”.
Anecdote 1) just seems ridiculous. Of course if you walk into a language class that’s out of your depth, you’re going to be lost; everyone knows that. Every normal person can learn every natural language; there’s no such thing as someone who’s intrinsically “terrible at Greek”. The solution is just to swallow your pride and go back to an earlier class. But it seems like anecdote 2) is not only plausible but probably happens rather often. There is some irrational belief that skill at mathematics is some kind of unrefinable Gift: some people can do it and others just can’t. This idea seems absurd to me: there is no “math gene”, and there are no other examples of skills that some people can acquire and others simply cannot.
It’s actually anecdote 1 that seems plausible to me and anecdote 2 that does not.
I happen to have been a language teacher, teaching adult hobbyists. It seemed to me that lots of my students had very unrealistic ideas about how easy it would be for them to learn a foreign language. They really did come to class expecting one thing and 15 minutes later finding out quite another thing. They typically brushed off their past bad experience with learning, say, Spanish back in school, on the theory that they’d never really been motivated to learn Spanish but they were really truly motivated to learn language X which I was teaching. Then they realized that learning language X involved a lot of the same boring grammar talk and memorization which they’d found so hard/boring when learning Spanish. (Of course it’s also possible that my classes just sucked.)
By contrast, no one walks into advanced algebra classes having no idea what math is about. People who think they’re terrible at math usually infer this from having spent 10 years in a school system where they consistently had trouble with math assignments, performed poorly on math tests, and had trouble understanding what math teachers were talking about. Most people who think they’re bad at math probably are actually bad at math. Sure, in some other universe they might have become good at math if some early stimulus had swung another way—maybe another teaching style would have helped, or a role model or whatever. But it also seems very reasonable to think that some people are born with more aptitude for math than others. General intelligence certainly has a large heritable component, and I’m sure that holds for the special case of mathematical aptitude.
I spent many years operating under the assumption that everyone was about equally smart, and constructing elaborate explanations for why the reality I was confronted with seemed to be in so much conflict with that theory. Sure, if you add enough epicycles you can do it—but there’s a far simpler theory which has more explanatory power: some people are “just dumb”. I personally find that a liberating theory to operate under. A lot of my “aha moments” seem to involve either the realization that “yes, people really are that stupid” or the realization that “yes, I really am that stupid”.
Unlike most other subjects, math is cumulative: students are taught one technique, they practice it for a while, and then they’re taught a second technique that builds on the previous. So there are two skills required:
1. The discipline to study and practice a technique until you understand it and can apply it easily.
2. The ability to close the inferential gap between one technique and the next.
The second is the source of trouble. I can sit in on a single day’s instruction of a language class (and I have) and learn something about that language. But if a student misses just one jump in math class, the rest of the year will be incomprehensible. No wonder people become convinced they’re “terrible at math” after an experience like that!
How is that unlike other subjects? Seems pretty universal.
An example of dark arts used for a good cause. The problem is that the children weren’t strong enough to understand the concept of being potentially better at math, of it being true that enthusiasm will improve their results.
They can’t feel the truth of the complicated fact of [improving in the future if they work towards it], and so you deceive them into thinking that they are [good already], a simpler alternative.
Vladimir, the problem has nothing to do with strength—some of these students did very well in other classes. Nor is it about effort—some students had already given up and weren’t bothering, others were trying futilely for hours a night. Even closing the initial inferential gap that caused them to fall behind (see my reply to Daniel_Burfoot above) didn’t solve the problem.
The problem was simply that they believed “math” was impossible for them. The best way to get rid of that belief—maybe the only effective way—was to give them the experience of succeeding at math. A pep talk or verbal explanation of their problems wouldn’t suffice.
If your definition of “the dark arts” is so general that it includes giving an easy homework assignment, especially when it’s the best solution to a problem, I think you’ve diluted the term beyond usefulness.
Ah, and where’s your peer-reviewed scientific evidence for that, or is it merely an article of faith on your part?
I’m not clear if you’re being sarcastic here, or just unable to see that this is precisely the same as my argument for trying out things you know are “wrong”.
Meanwhile, I think that you’re also still assuming that “believe” and “think true” are the same thing. I can make use of information from a book whose author believes in channeled spirits, without having to believe that channeled spirits actually exist.
In the instrumental sense, belief is merely acting as if something is true—which is not the same thing as thinking it’s actually true.
The thing that I think people here get wrong is simply that they assume that if they know something is not actually true, this means it’s permissible to discount any empirical evidence that the procedure associated with the untrue belief is actually useful. (Hypnosis is useful, for example, despite the fact that Anton Mesmer thought it had something to do with magnetism.)
Intermediate level: Rational evidence. I’ve learned the hard way that uncompromising epistemic perfectionism is not so much a grand triumph of virtue as, rather, the bare minimum required to not instantly completely epically fail when thinking using a human brain. You think you have error margin? You, a Homo sapiens? I wish I lived in the world you think you live in.
Me too. Which is why I find it astounding that you appear to be arguing against testing things.
The difference in my “bare minimum” versus yours is that I’ve learned not to consider mental techniques as being tested unless I have personally tested them using a “shut up and do the impossible” attitude. A statistically-validated study is too LOW of a bar for me, since I have no way to find out what statistic I will represent until I try the thing for myself.
If a human should be able to reinvent all of physical science by themselves, they should be able to do the same with the mental sciences. In other words, they should be able to test themselves, in a way that allows them to detect their own biases… particularly the biases that lead them to avoid testing things in the first place.
Okay… first, “shut up and do the impossible” may sound like it has a nice ring to you, but there’s something specific I mean by it—a specific place in the hierarchy of enthusiasm, tsuyoku naritai, isshokenmei, make an extraordinary effort, and shut up and do the impossible. You’re talking enthusiasm or tsuyoku naritai. “Shut up and do the impossible” is for “reduce qualia to atoms” or “build a Friendly AI based on rigorous decision theory before anyone manages to throw the first non-rigorous one together”. It is not for testing P. J. Eby’s theories of willpower. That would come under isshokenmei at the highest and sounds more like ordinary enthusiasm to me.
Second, there are, literally, more than ten million people giving advice about akrasia on the Internet. I have no reason to pay attention to your advice in particular at its present level of rigor; if I’m interested in making another try at these things, I’ll go looking at such papers as have been written in the field. You, I’m sure, have lots of clients and these clients are selected to be enthusiastic about you; keeping a sense of perspective in the face of that large selection effect would be an advanced rationalist sort of discipline wherein knowledge of an abstract statistical fact overcame large social sensory inputs, and you arrived very late in the OBLW sequence and haven’t caught up on your reading. I can understand why you don’t understand why people are paying little attention to you here, when all the feedback on your blog suggests that you are a tremendously intelligent person whose techniques work great. But to me it just sounds like standard self-help with no deeper understanding. “Just try my things!” you say, but there are a thousand others to whom I would rather allocate my effort than you. You are not the only person in the universe ever to write about productivity, and I have other people to whom I would turn for advice well before you, if I was going to make another effort.
It is your failure to understand why the achievements of others are important—why a science paper reporting the result of one experiment on willpower has higher priority for examination by me than you and all your brilliant ideas and all your enthusiasm about them and all the anecdotal evidence about how it worked for your clients, that is, your failure to understand the different standards this community lives by—and your failure to understand why science works, and why it is not just pointless formality-masturbation but necessary. Yes, there’s a lot of statistical masturbation out there. But conducting a controlled experiment and quantifying the result, instead of just going by anecdotal evidence about what worked for whom, really is necessary. This is not generally appreciated by human beings, and appreciating that fact, that it is counterintuitively necessary to do science, that it is not obvious but it really is necessary, is one of the entrance passes to the secret siblinghood of rationalists. This is perhaps something I should write about in more detail, because it’s one of those things so basic that I tend to take it for granted instead of writing about it.
As for your idea that others’ failure to pay attention to you in particular indicates a willpower failure on their part… that’s what we call “egocentric biases in availability”, namely, you think you are a much larger part of others’ mental universe than in fact you are. So much credibility as to try your suggestion instead of a million other suggestions is something that has to be earned. You haven’t earned it, only berated people for not listening to you. There are communities where that works, like self-help, where people are used to being berated, but in the vaster outside universe it will get you nowhere. You have to see the universe as others see it in order to get them to listen to you, and this involves understanding that they do not see you the way you see yourself. To me you are simply one voice among millions.
Necessary for determining true theories, yes. Necessary for one individual to improve their own condition, no. If a mechanic uses the controlled experiment in place of his or her own observation and testing, that is a major fail.
I’ve been saying to try something. Anything. Just test something. Yes, I’ve suggested some ways for testing things, and some things to test. But most of them are not MY things, as I’ve said over and over and over.
At this point I’ve pretty much come to the conclusion that it’s impossible for me to discuss anything related to this topic on LW without this pervasive frame that I am trying to convince people to “try my things”… when in fact I’ve bent over backwards to point as much as possible to other people’s things. Believe it or not, I didn’t come here to promote my work or business.
I don’t care if you test my things. They’re not “my” things anyway. I’m annoyed that you think I don’t understand science, because it shows you’re rounding to the nearest cliche.
I actually advocate using a much higher standard of empirical testing of change techniques than is normally used in measuring psychological processes: observation of somatic markers (see Wikipedia re: the “somatic marker hypothesis”, if you haven’t previously).
Unlike self-reporting via questionnaire, many somatic markers can be treated as objective measures of results, because they are externally visible (facial expressions, posture change, etc.) and thus can be observed and measured by third parties. We can all agree whether someone flinches or grimaces or hangs their head in response to a statement—we are not dependent on the person themselves to tell us their internal reaction, nor do we have to sort through their conscious attempts to make their initial reaction look better.
True, I do not have a quantified scale for these markers, but they are nonetheless quantifiable—and this approach is a direct outgrowth of a promising current neuroscience hypothesis. We can certainly observe whether a response is repeatable, and whether it is changed by any given intervention.
If someone wanted to turn that into controlled science, they’d have a lot of work ahead of them, but it could be done, and it would be a good idea. The catch, of course, is that you’d need to validate a somatic marker scale against some other, more subjective scale that’s already accepted, possibly in the context of some therapy that’s also relatively-validated. It seems to me that there are some chicken-and-egg problems there, but nothing that can’t be done in principle.
When I advocate that people try things, I mean that they should employ more-objective means of measurement—and on far-shorter timescales—than are traditionally used in the self-help field.
When I test some newfangled self-help modality (e.g. EFT, Sedona, etc.) it usually doesn’t take more than 30 minutes after learning the technique to know if it’s any good or not, because I have a way of measuring it that doesn’t depend on me doing any guessing. Either I still flinch or I don’t. Either I get a sinking feeling in my gut or I don’t. I know right then, in less time than it would take to list all the holes in their crazy pseudoscience theories about how the technique is supposed to work. (EFT, for example, works for certain things but its theory is on a par with Anton Mesmer’s theory of animal magnetism.)
I don’t know how you can get any more objective than that, at the level of individual testing. So, if there is anything that I’ve consistently advocated here, it is that it’s possible to test self-help techniques by way of empirical observation of somatic marker responses both “before” and “after”. But even this is not “my” idea.
The somatic marker hypothesis is cutting-edge neuroscience—it still has a long way to go to reach the status of accepted theory. That makes using it as a testing method a bit more bleeding edge.
But for individual use, it has the advantage of being eminently testable.
Regarding the rest of your comment, I don’t see how I can respond, since as far as I can tell, you’re attacking things I never said… and if I had said them, I would agree with your impeccable critique of them. But since I didn’t say them… I don’t see what else I can possibly say.
Maybe I should wait for the canonical article, but is your argument that false beliefs are not part of a first-best approach to rationality, even though they might be part of a second-best approach? Or is it something stronger than that?
I, for one, am interested whether there are convenient self-deceptions that promise instrumental benefits, short-term or otherwise. If nothing else, this will help me adequately assess the potential costs of rationality, rather than taking its benefits as a matter of faith.
Believing things that aren’t true can be instrumentally rational for humans—because their belief systems are “leaky”—lying convincingly is difficult—and thus beliefs can come to do double duty by serving signalling purposes.
Yes, this is indeed the sort of argument that I’m not at all interested in, and naming this site “Less Wrong” instead of “More Wrong” reflects this. I’m going to find where the truth takes me; let me know how that lies thing works out—though I reserve the right not to believe you, of course.
Hypothetical (and I may expand on this in another post):
You’ve been shot. Fortunately, there’s a well-equipped doctor on hand who can remove the bullet and stitch you up. Unfortunately, he’s got everything he needs except any kind of pain killer. The only effect of the painkiller is going to be on your (subjective) experience of pain.
He can say: A. Look, I don’t have painkiller, but I’m going to have to operate anyhow.
B. He can take some opaque, saline (or otherwise totally inert) IV, tell you it’s morphine, and administer it to you.
Which do you prefer he does? Knowing what I know about the placebo effect, I’d have to admit I’d rather be deceived. Is this unwise? Why?
Admittedly, I haven’t attained a false conclusion via my epistemology. It’s probably wise to generally trust doctors when they tell you what they’re administering. So it seems possible to want to have false belief, even while wanting to maintain efficient epistemology. This might not generalize to Pjeby’s various theories, but it seems that we can think of at least one case where we would desire having a false belief. Admittedly, this might not be a decision we could make, i.e. “Lie to me about what’s in that IV!” might not help. (Though there is some evidence of placebos working even when people were made fully aware they were placebos.)
On the other hand, I’m not sure I can think of an example of where we desire to have a belief that we know to be false, which may be the real issue.
The doctor should say “This is the best painkiller I have” and administer it. If the patient confronts the question, it’s already too late.
Are you implying that the doctor should act to trigger a placebo effect, while still making a true statement? Because in the least convenient version of the dilemma, you would have to choose one or the other.
Erased my previous comment. It missed the real point.
If you think the doctor should say, “This is the best painkiller I have,” that suggests you want to believe you are getting a potent painkiller of some kind. You want to believe that it is a potent painkiller, which is false, as opposed to it is the most potent of the zero painkillers he has, which is true. The fact that the doctor is not technically lying does not change the fact you want to believe something that is false.
If the IV contains a saline solution, the Way may want me to believe the IV contains a saline solution, but I sure as Hell want to think it contains a potent painkiller.
(Yes, I realize the irony in using the expression “sure as Hell.”)
“Pain will go away” is a true belief for this situation.
The doctor can do a heck of a lot better than that, even without lying. Ericksonian hypnosis, for example, involves a lot of artfully-vague statements like, “you may notice some sensation happening now”, and amplifying them to lead a person to believe more specific suggestions (such as pain-relief suggestions) that follow. A lot of it can also be done covertly, such that the patient is never consciously aware that a hypnotic procedure is under way.
(Of course, statistics say that relatively few people are able to undergo major surgery with hypnoanesthesia. But if that’s the only painkiller you have, it’d be silly not to use it.)
Omega asks you to silently guess the color of a bead in a jar. Omega then inflicts some amount of pain on you. If Omega believes that you believe the bead to be red (it is in fact blue) then he will administer subjectively less pain.
The win here is for Omega to believe that you believe the bead is red. In the surgery situation, we only have to trick part of our brain.
I suspect that with practice, this would be easier if one were actually attempting it, rather than concluding mid-surgery that morphine does not work on you.
I was trying to explain why it can be instrumentally rational for humans to believe things that aren’t true.
For example, if it is the middle ages and you are surrounded by righteous Christian types, it is probably better (in terms of avoiding being burned at the stake) to believe in god than to be an atheist and pretend to be a believer.
Lying is often dangerous for humans—because the other humans have built-in lie detectors.
I was advocating truthfully expressing your own false beliefs under those circumstances. I was not advocating believing the truth (as an epistemic rationalist and an atheist) and then lying about it to save your skin—or, indeed, freely expressing your opinions and thereby getting ostracised, excommunicated, or whatever.
Believing the truth is not my main goal—nor is it a particularly biologically realistic goal. Organisms that prioritise believing the truth over survival and reproduction can be expected to do poorly. So: it is reasonable to expect that most organisms you actually observe do not value truth that highly.
What about organisms that claim to be pure truth seekers? My first reaction is that they are probably deceiving me about their motives—probably for signalling purposes. Not necessarily lying—they might actually believe themselves to be truth-seekers—but rather acting inconsistently with their stated motives. Another possibility is that their brains have been infected with deleterious memes—rather like what happens to priests.
In the first case, they are behaving hypocritically—and I would prefer it if they stopped deceiving me about their motives. In the second case, I am inclined to offer therapy—though there’s a fair chance that this will be rejected.
Having been in this circumstance in the past—i.e., for most of my life believing myself to be such a truth-seeker—I have a simpler explanation of how it works. Signaling is not a conscious part of it, even though the mechanism in question is clearly evolved for signaling purposes.
It’s what Robert Fritz calls in his books, “an ideal-belief-reality conflict”—a situation where one creates an ideal that is the opposite of a fear. If you fear lies, or being wrong, then you create ideals of Truth and Right, and you promote these ideas to others as well as striving to live up to them yourself.
Of course, you can have such a conflict about tons of things, but pretty much, anybody who has an Ideal In Capital Letters—something that they defend with zeal—you know this mechanism is at work.
The key distinction between merely thinking that truth or right or fairness are just Really Good Ideas and being an actual zealot, though, is how a person responds to their absence or the threat of their absence. The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
The brain machinery that IBRCs run on is pretty clearly the mechanism that evolved to motivate signaling of social norms, and people can have many IBRCs. I’ve squashed dozens in myself, including several relating to truth and rightness and fairness and such. They’re also a major driving force in chronic procrastination, at least in my clients.
They’re not consciously deceiving anyone; they’re sincere in their belief, despite the fact that this sincerity is a deception mechanism.
Sadly, the IBRC is the primary mechanism by which we irrationally separate our beliefs from their application. The zealotry with which we profess and support our “Ideal” is the excuse for our failure to actually act. For example, if your ideal is Truth, then you can always ask for more truth to justify something you don’t want to do, but insist that anything you do want to do is supporting the Truth. (Not that any of this takes place consciously, mind you.)
So someone who’s not actively investigating their own ideals for IBRCs is not really serious about being a rationalist. They’re just indulging an ideal of rationalism and signaling their superiority and in-group status, whether they realize it or not. (I sure never realized it all the years that I was doing it.)
I agree that people can take “really good ideas” too far, but I’m not satisfied by the distinction you draw.
The person who has aversive feelings for their absence, or expresses social disapproval out of personal emotion, is operating under the influence of an IBRC. Rational thought does not link the absence of a good to aversive emotion, nor equate the absence of a good to an active evil.
ISTM that the good can be arbitrarily defined as the absence of a bad, and vice versa.
Only if you’re speaking in an abstract way that’s divorced from the physical reality of the hardware you run on. Physically, neurologically, and chemically, we have different systems for acquisition and aversion, and good/bad are only opposites at the extremes. At the hardware level, labeling something “bad” is different from labeling it “not very good”, in terms of both experiential and behavioral consequences. (By which I mean that those two things also produce different biases in your thinking.)
Some people have a hard time grokking this, because intellectually, it’s easier to think of a 1-dimensional good/bad spectrum. My personal hypothesis is that we evolved a simple mechanism for predicting others’ attitudes using the 1-dimensional model, as a 1-dimensional predictive model is more than good enough for figuring out that predators want to attack you and prey wants to escape you.
However, if you want to understand your own behavior, or really accurately model the behavior of others (or even just be aware of the truth of how the platform works!), then you’ve got to abandon the built-in 1D model and move up to at least a 2D one, where the goodness and badness of things can vary semi-independently.
Thanks for sharing.
It all makes me think of the beauty queens—and their wishes for world peace.
One of the concepts I’ve been playing with is the idea that the advantage of knowing our innate biases is not so much in overcoming them but in identifying and circumventing them.
Your common scenarios regarding risk assessment and perceptions of loss vs. gain generally assume a basis in evolutionary psychology. If these are in fact built into our brains, it strikes me that trying to overcome them directly is a skill we can never fully master, and trying to do so tempts akrasia.
Consider a scenario where you can spend $1000 to have a 50% shot at winning $2500. It’s a positive-expected-value bet, but turning over the $1000 is tough because of how we weigh loss (if I recall correctly, losses are weighted about twice as heavily as gains). On the other hand, you can just tell yourself (rationalize?) that when you hand over the $1000 you’re getting back $1250 for sure. It’s an incorrect belief, but one I’d probably use, as I wouldn’t have to expend willpower overcoming my faulty loss-aversion circuits.
Which approach would you use?
Not true; $2500 is not necessarily 2.5 times as useful as $1000.
http://en.wikipedia.org/wiki/Marginal_utility#Diminishing_marginal_utility
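For concreteness, here is a minimal sketch (in Python) of how the different evaluations in that scenario come apart. The starting wealth, the logarithmic utility curve, and the 2× loss weight are illustrative assumptions for the example, not figures anyone in this thread has committed to:

```python
import math

# Illustrative numbers only.
WEALTH = 10_000      # hypothetical starting wealth
STAKE = 1_000
PRIZE = 2_500
P_WIN = 0.5
LOSS_WEIGHT = 2.0    # losses assumed to weigh roughly twice as much as gains

# Raw expected value of taking the bet, relative to not betting.
expected_value = P_WIN * (PRIZE - STAKE) + (1 - P_WIN) * (-STAKE)
print(f"Expected value of the bet: {expected_value:+.0f} dollars")   # +250

# A loss-averse "felt" evaluation: gains at face value, losses doubled.
felt_value = P_WIN * (PRIZE - STAKE) + (1 - P_WIN) * LOSS_WEIGHT * (-STAKE)
print(f"Loss-averse felt value:    {felt_value:+.0f} dollars")       # -250

# Diminishing marginal utility: under log utility over total wealth,
# gaining $2500 is worth less than 2.5 times gaining $1000.
def utility(wealth):
    return math.log(wealth)

gain_2500 = utility(WEALTH + 2_500) - utility(WEALTH)
gain_1000 = utility(WEALTH + 1_000) - utility(WEALTH)
print(f"Utility ratio of a $2500 gain vs a $1000 gain: {gain_2500 / gain_1000:.2f}")  # about 2.34
```

The point is only that the raw expected value (+$250) and the loss-weighted evaluation (−$250) have opposite signs, and that a concave utility function further discounts the prize; that is the gap the proposed “$1250 for sure” story is papering over.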
People overcome innate but undesired drives all the time, like committing violence out of anger. Your former approach actually doesn’t sound very hard to me, although it might be hard for someone unusually loss-averse. Also, the latter approach sounds like it might not be self-deception in every sense, since there’s no single thing in the mind that is a “belief” (q.v. Instrumental vs. Epistemic – A Bardic Perspective); it seems like this point is being consistently ignored throughout this discussion.
Well, what you want to do (just about by definition) is be rational in the instrumental sense.
I put significant terminal utility in believing true things, and believe that epistemic rationality is very important for instrumental rationality. Furthermore, it is the right decision to choose not to self deceive in general because you can’t even know what you’re missing and there is reason to suspect that it is a lot.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
Maybe you just meant “I’m not interested in that kind of argument because it is so clearly wrong to not be worth my time”, but it seems to come across as “I don’t care even if it’s true”, and that’s probably where the downvote came from.
This is a confusion based on multiple meanings of “belief”, along the lines of the “does the tree make a sound?” debate. Depending on your definition of belief, the above is either trivial or impossible.
For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it’s really blue afterward, have nothing to do with your “believing” in that moment. “Believe” is a verb—something that you DO, not something that you have.
In common parlance, we think that belief is unified and static—which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it’s still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison.
In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone “believes” they’re not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it.
Notice that these two parts are separate: there is a thought that occurs, and then it is believed… i.e., passively accepted, without dispute.
Thus, there is really no such thing as “belief”—only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it’s the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.
You might well be right that there are loads of “useful falsehoods”, you might even know them personally, but you’re wrong to claim fear of knowingly internalizing such falsehoods is irrational just because you know them to work. Simply taking your word for it would in fact be less rational.
This sounds like a good creativity hack, but I don’t see what it has to do with accepting false beliefs.
It’s an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear “more wrong” than where you started from.
[boggle] Why do you think this has anything to do with me? Placebos are useful falsehoods, and there’s tons of research on them. Go look at Dweck and Seligman on the growth mindset and optimism, respectively.
Hell, go study pickup or hypnosis or even acting, for crying out loud. Direct marketing, even. ANY practical art that involves influencing the beliefs of one’s self or others, that’s tied to reasonably timely feedback.
To the extent that you find the teachings of these arts to be less than “true”, and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true.
However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning.
And the truth is not a substitute for that, however much blind faith you put into it.
The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning. In all other matters, knowing what to do is immensely more important than knowing why… and the why is only useful if it helps you to believe in something enough to make you actually DO something.
You’ve really hedged your language here. Are we talking about beliefs, or “perspectives”? The two seem very different to me. Does anyone ever acquire a skill without trying new perspectives, unproven variations on existing “known-good” techniques? This is just exploration vs exploitation, which seems quite distinct from belief. I don’t change betting strategies just because I’m in the middle of an experiment.
Because it seems that you’ve had more experience with LW’ers rejecting your useful falsehoods than useful falsehoods in general, and I guessed this as the motive behind your original complaint. I could be mistaken. If I am, I’m curious as to which “terror” you’re referring to. It seems fairly widely accepted here that a certain amount of self-deception is useful in the pickup domain, for example.
Really? All self-described teachers of practical arts have the comfort of systematized winning? There are no snake-oil charlatans for whom things just “went well” and are now out to capitalize on it? How can we tell the difference?
I think the above exemplifies the mismatch between your philosophy and mine. Yes, it’s incorrect to claim that only true beliefs are useful. But the stuff of true beliefs (reason, empiricism) are the only tools we have when trying to figure out what wins and what doesn’t. To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness. My position is that T trumps U because U is inaccessible without T. I don’t see any other way to reliably arrive at U instead of ~U or V. I am reminded of the Library of Babel.
I said “for all that”, which is not “for all of”. Very different meaning. “For all that” means something like “despite the fact that”, or “although”. I.e., “although the teachers of practical arts are often deluded, they at least have the comfort of systematized winning.” What’s more, it’s you who said “self-described”—I referred only to people who have some systematized winning.
See, that’s the sort of connotation I find interesting. How is “snake oil charlatan” connected to having things go well and wanting to capitalize on it? Would you want to be taught by someone who didn’t have things go well for them? And if they didn’t want to capitalize on it in some fashion, why would they be teaching it? (Even if the only capitalization taking place is that they enjoy teaching!)
If you break down what you’ve just said, it should be easy to see why I think this sort of “thinking” is just irrationally-motivated reaction—the firing off of “boo” lights in response to certain buttons being pushed.
No—I’m saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren’t really testing U, you’re testing U+X, where X is whatever else it is you believe about U, like, “I’m going to see if this works”, or “I think this is stupid”.
Good epistemic hygiene in testing the usefulness of a belief requires that you not contaminate your test chamber with other beliefs.
Now, that may sound like a defense of psychic phenomena. But it isn’t. You don’t need an absence of skepticism from the overall proceedings, only a temporary absence of skepticism in the performer.
And the measurement of the performer’s results can be as objective and skeptical as you like. (Although, for processes whose intent is also subjective—i.e., to make you feel better about life or be more motivated—then only the subjective experiencer can measure that bit, of course.)
Sometimes, I get clients who will say something like, “Well, I felt better, but how do I know I wasn’t just imagining it?”, and I have to separate out the confusion. Because what they’re really saying is, “At time X I felt good, but now at time Y I’m thinking that maybe it wasn’t real”.
However, if the experiment was “perform procedure Z at time X-1” with a prediction that this will result in a positive experience at time X, then procedure Z worked. And retroactively questioning it is only making you feel bad now—it can’t change how you felt at time X, although it can reconsolidate your memory so it seems like you felt worse at time X.
In other words, it’s questioning yourself afterwards that’s poor epistemic hygiene, because it actually alters your memories. (See all those studies about how asking people leading questions alters their memories.)
This “success at time X, questioning at time Y” pattern is really common among naturally-struggling people. It’s basically the way people prevent themselves from advancing. And it doesn’t matter what procedure Z is—it could be something like making a plan for their day. I’ll ask, “well, did you make a plan?” And they’ll be like, “well, yeah, but what if I didn’t do it right?”
It’s this process of self-questioning that directly results in the problems. If you want to develop skill at something, you can’t tinker with your success criteria after the fact, to make it so that you failed anyway.
Skepticism is useful before you do something, to set up the criteria for measuring something. But it’s not useful while you’re doing the thing, nor after you’ve actually done it.
The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, that they will suddenly become delusional theists or something. Which is really ironic, because in the type of situation I’m describing it’s the questioning that creates the delusion, redefining the past to suit the whims of the present.
I did assume you held the position that these people are somehow identifiable. If your point was merely “there exist some people out there who are systematic winners”… then I’m not sure I get your point.
Because “I figured out the key to success, I succeeded, and now I want to share my secrets with you” is the story that sells, regardless of actual prior circumstance or method.
I don’t think you understand why I bring up charlatans. This is a signaling problem. You’re right… I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there’s going to be a lot of faking going on.
My, you are confident in your theories of human motivation. You said (minus subsequent disclaimers, because this is what I was responding to), “teachers of the practical arts [...] have the comfort of systematized winning”. It seems to me that this “comfort” is claimed far out of proportion to its actual incidence, which bears very directly on the whole issue of distinguishing “useful” signal from noise. If you do have legitimate insights, you’re certainly not making yourself any more accessible by pointing to others in the field. If your point was merely “some deluded people win”… then I’m not sure I get your point.
This response isn’t really addressing my point of contention, with the result that I mostly agree with the rest of your comment (sans last paragraph). So I’ll try to explain what I mean by “T”. You say “skepticism is useful before you do something”, and it’s precisely this sort of skepticism that T represents. You leapt straight into explaining how I’ve just got to embrace U in order to make it work, but that doesn’t address why I’m even considering U in the first place. Hence “I first need a true belief T that U is useful”. Pardon me for a moment while I look into how useful it is to believe I’m a goat.
Again, I think you’re overstating this fear, but now that you mention theism, I can’t help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you’ve already decided to experimentally swallow… work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?
Well, in the case of at least marketing and pickup, you can generally observe the teacher’s own results, as long as you’re being taught directly. For acting, you could observe the ability of the teacher’s students. Copywriting teachers (people who teach the writing of direct marketing ads) can generally give sales statistics comparisons of their improvements over established “controls”. (Btw, in the direct marketing industry, the “control” is just whatever ad you’re currently using; it’s not a control condition where you don’t advertise or run a placebo ad!)
IOW, the practical arts of persuasion and belief do involve at least some empirical basis. One might quibble about what great or excellent acting or pickup might be, but anybody can tell bad acting or failed pickup. And marketing is measurable in dollars spent and actions taken. Marketers don’t always understand math or how to use it, but they’re motivated to use statistical tools for split-testing.
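As a minimal sketch of the kind of statistical tool meant here, this is a two-proportion z-test for a split test, written from scratch in Python; the visitor and conversion counts are made up for illustration:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for the difference in
    conversion rates between two ad variants (a "split test")."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: the current ad (the industry's "control") vs. a challenger.
z, p = two_proportion_z_test(conv_a=120, n_a=5000, conv_b=160, n_b=5000)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

Nothing here depends on the marketer understanding why the formula works; the split-testing habit pays off as long as the counting is honest.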
The ancient Greeks thought fire was an element, but that didn’t stop them from using fire. Developing a practical model and a “true” theory are quite often independent things. My point is that you don’t need a true theory to build useful models, or to learn and use them. And in most practical arts related to belief or persuasion, you will need to “act as if” certain beliefs are true, whether or not they are, because those beliefs nonetheless represent a model for reproducing behaviors that produce results under some set of circumstances.
For example, Seth Roberts’ theory of calorie-flavor association is probably not entirely true—but acting as if it were true produces results for some people under some circumstances. This represents progress, not failure.
Right—and my process for that, with respect to self-help techniques, is mainly to look at the claims for a technique, and sort for ones that can be empirically verified and claim comparable or improved benefits relative to the ones that I’ve already tried. Assuming that the cost in time to learn the technique is reasonable (say, a few hours), and it can be implemented and tested quickly, that’s sufficient T probability for me to engage in a test.
Religion doesn’t claim repeatable empirical benefits—in fact they pretty carefully disclaim any. Zen is one of the few religions that contain procedures with claimed empirical benefits (e.g. meditation producing improved concentration and peace of mind), and those claims have actually held up pretty well under scientific investigation as well as my personal experimentation.
So, for me at least, your “T” consists mostly of claimed empirical benefits via a repeatable procedure capable of very short evaluation times—preferably suitable for immediate evaluation of whether something worked or it didn’t.
I do have two things that most people evaluating such things don’t. At first, I tried a lot of these same techniques before I understood monoidealism and somatic markers, and couldn’t get them to work. But once I had even the rudiments of those ideas—not as theory but as experience—I got many of the same things to work quite well.
That suggests very strongly to me that the major hidden variable in interpersonal variation of self-help technique applicability has less to do with the techniques themselves or any inherent property of the learner, than with whether or not they’ve learned to distinguish conscious and unconscious thoughts, and their abstract conception of an emotion or event from its physical representation as a body sensation or as an internal image or sound. Most people (IME) seem to naturally confuse their internal narration about their experiences with the experiences themselves. (Sort of like in “Drawing On The Right Side Of The Brain”, where people confuse their symbols or abstractions for faces and hair with what they’re actually seeing.)
Separating these things out is the primary skill set I teach (as a vehicle to make other self-help techniques accessible), and many people require some sort of live feedback in order to learn it. There is some mild anecdotal evidence that prior experience with meditation helps—i.e. the students who pick it up faster seem somewhat more likely to report prior meditation experience. But I haven’t even tried to be rigorous about investigating that, since even non-meditators can learn the skill.
(Hm, now that I’ve written this, though, I wonder whether some of the Drawing On The Right Side Of The Brain exercises might be helpful in teaching these skills. I’ll have to look into that.)
If you look closely at what I said, I was explaining why I thought what I thought about your response, not saying that my thought was correct; I just wanted to explain why I had the impression that I did, not justify the impression or argue that it was actually true. That’s a subtlety that’s hard to convey in text, I suppose.
Why should a belief be true just because it’s useful? Or are you saying people are claiming a belief’s usefulness is not true despite the evidence that it’s useful?
Neither. I’m saying that a popular attitude of LW culture is to prefer not to “believe” the thing it’s useful to believe, if there is any evidence the belief is not actually true, or often even if there is simply no peer-reviewed evidence explicitly associated with said belief.
For example, self-fulfilling prophecies and placebo effects. Some people here react with horror to the idea of believing anything they can’t statistically validate… some even if the belief has a high probability of making itself come true in the future.
My immediate reaction to this paragraph is skepticism that I can believe something, if I don’t believe the evidence weighs in its favor; other people might be able to choose what they believe, but I’ve internalized proper epistemology well enough that it’s beyond me. On reflection, though, while I think there is some truth to this, it’s also a cached oversimplification that derives its strength from being part of my identity as a rationalist.
Related to: Belief in Self-Deception, Litany of Tarski.
Well, while a self-fulfilling belief might help you accomplish one goal better, it may make you worse off accomplishing another (assuming that belief is not true). It may be the case that some false self-fulfilling beliefs will make you better off throughout your life, but that’s hard to prove.
Thank you for eloquently demonstrating precisely what I’m talking about.
Results are neither right nor wrong—they just are.
To expand a little on what timtyler said, I think you’re mixing up beliefs and actions.
Doing nothing doesn’t make your beliefs less wrong, and placing bets doesn’t make your beliefs more right (or wrong).
Wanting to be ‘less wrong’ doesn’t mean you should be conservative in your actions.
I’ve also had mixed feelings about the concept of being “less wrong.” Anyone else?
Of course, it is easier to identify and articulate what is wrong than what is right: we know many ways of thinking that lead away from truth, but it is harder to know which ways of thinking lead toward the truth. So the phrase “less wrong” might merely be an acknowledgment of fallibilism. All our ideas are riddled with mistakes, but it’s possible to make fewer mistakes or less egregious mistakes.
Yet “less wrong” and “overcoming bias” sound kind of like “playing to not lose,” rather than “playing to win.” There is much more material on these projects about how to avoid cognitive and epistemological errors, rather than about how to achieve cognitive and epistemological successes. Eliezer’s excellent post on underconfidence might help us protect an epistemological success once we somehow find one, and protect it even from our own great knowledge of biases, yet the debiasing program of LessWrong and Overcoming Bias is not optimal for showing us how to achieve such successes in the first place.
The idea might be that if we run as fast as we can away from falsehood, and look over our shoulder often enough, we will eventually run into the truth. Yet without any basis for moving towards the truth, we will probably just run into even more falsehood, because there are exponentially more possible crazy thoughts than sane thoughts. Process of elimination is really only good for solving certain types of problems, where the right answer is among our options and the number of false options to eliminate is finite and manageable.
If we are in search of a Holy Grail, we need a better plan than being able to identify all the things that are not the Holy Grail. Knowing that an African swallow is not a Holy Grail will certainly keep us from failing to find the true Holy Grail by erroneously mistaking a bird for it, but it tells us absolutely nothing about where to actually look for the Holy Grail.
The ultimate way to be “less wrong” is radical skepticism. As a fallibilist, I am fully aware that we may never know when or if we are finding the truth, but I do think we can use heuristics to move towards it, rather than merely trying to move away from falsehood and hoping we bump into the truth backwards. That’s why I’ve been writing about heuristics here and here, and why I am glad to see Alicorn writing about heuristics to achieve procedural knowledge.
For certain real-world projects that shall-not-be-named to succeed, we will need to have some great cognitive and epistemological successes, not merely avoid failures.
And if you play the lottery long enough, you’ll eventually win. When your goal is to find something, approach usually works better than avoidance. This is especially true for learning—I remember reading a book where a seminar presenter described an experiment he did in his seminars, of sending a volunteer out of the room while the group picked an object in the room.
After the volunteer returned, their job was to find the object and a second volunteer would either ring a bell when they got closer or further away. Most of the time, a volunteer receiving only negative feedback gives up in disgust after several minutes of frustration, while the people receiving positive feedback usually identify the right object in a fraction of the time.
In effect, learning what something is NOT only negligibly decreases the search space, despite it still being “less wrong”.
(Btw, I suspect you were downvoted because it’s hard to tell exactly what position you’re putting forth—some segments, like the one I quoted, seem to be in favor of seeking less-wrongness, and others seem to go the other way. I’m also not clear how you get from the other points to “the ultimate way to be less wrong is radical skepticism”, unless you mean lesswrong.com-style less wrongness, rather than more-rightness. So, the overall effect is more than a little confusing to me, though I personally didn’t downvote you for it.)
Thanks, pjeby, I can see how it might be confusing what I am advocating. I’ve edited the sentence you quote to show that it is a view I am arguing against, and which seems implicit in an approach focused on debiasing.
Yes, this is exactly the point I was making.
Rather than trying to explain my previous post, I think I’ll try to summarize my view from scratch.
The project of “less wrong” seem to be more about how to avoid cognitive and epistemological errors, than about how to achieve cognitive and epistemological successes.
Now, in a sense, both an error and a success are “wrong,” because even what seems like a success is unlikely to be completely true. Take, for instance, the success of Newton’s physics, even though it was later corrected by Einstein’s physics.
Yet even though Einstein’s physics is “less wrong” than Newton’s, I think this is a trivial sense of the phrase which might mislead us. Cognitively focusing on being “less wrong” without sufficiently developed criteria for how we should formulate or recognize reasonable beliefs will lead to underconfidence, stifled creativity, missed opportunities, and eventually radical skepticism as a reductio ad absurdum. Darwin figured out his theory of evolution by studying nature, not (merely) by studying the biases of creationists or other biologists.
Being “less wrong” is a trivially correct description of what occurs in rationality, but I argue that focusing on being “less wrong” is not a complete way to actually practice rationality from the inside, at least, not a rationality that hopes to discover any novel or important things.
Of course, nobody in Overcoming Bias or LessWrong actually thinks that debiasing is sufficient for rationality. Nevertheless, for some reason or another, there is an imbalance of material focusing on avoiding failure modes, and less on seeking success modes.
At least one person seems to think that this post is in error, and I would very much like to hear what might be wrong with it.
Perhaps there are intuitive notions of “less wrong” that are different from “more right”, but in a technical sense, they seem to be the same:
Accounting for the uncertainty in your own mind only gets you so far, to a certain minimum of wrongness. To do better, to be less wrong, you have to actually be right about the rest of the universe outside your mind.
True but irrelevant; this is psychology, not probability theory. Intuitively, to a first approximation, beliefs are either affirmed or not, and there’s a difference between affirming fewer false beliefs and more true ones.
The fact that psychology can explain how the phrase “less wrong” can be misunderstood does not mean that the misunderstanding is the correct way to interpret that phrase when used by an online community that uses psychology, as well as probability theory, to inform the development of rationality. It does not make sense to interpret the title of our site with the very naivety that we seek to overcome.
That’s what I’ve been saying, actually. Except that the naivety in question is the belief that brains do probability or utility, when it’s well established that humans can have both utility and disutility, that they’re not the same thing, and that human behavior about them is different. You know, all that loss/win framing stuff?
It’s not rational to expect human beings to treat “less wrong” as meaning the same thing (in behavioral terms) as “more right”. Avoiding wrongness has different emotional affect and different prioritization of behavior and thought than approaching rightness. Think “avoiding a predator” versus “hunting for food”.
The idea that we can simultaneously have approach and avoidance behaviors and they’re differently-motivating is backed by a (yes, peer-reviewed) concept called affective asynchrony. Strong negative or strong positive emotions can switch off the other system, but for the most part, they operate independently. And mistake-avoidance motivation reduces creativity, independence, risk-taking, etc.
Heck, I’d be willing to bet some actual cash money that a controlled experiment would show significant behavioral differences between people primed with the terms “less wrong” and “more right”, no matter how “rational” they rate themselves to be.
You bet: there’s the one where you can be “less wrong” by never believing anything, because there are more possible false beliefs than true ones. You have now achieved perfect less-wrongness, at the cost of never having any more-rightness.
You missed the point. The intuitive meaning of “less wrong” you describe is a caricature of the ideal of this community.
If by “never believing anything”, you mean “don’t assign any probability to any event”, well then we give a person who does that a score of negative infinity, as wrong as it gets.
If you mean they evenly distribute the probability mass amongst all possibilities, that is what we consider maximum entropy, a standard so low that anything worse might be considered “reversed intelligence”. As Eliezer explained in the source I quoted earlier, we want to do better than freely confessed ignorance. We really do want actual intelligence, not just the absence of its opposite.
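(For a concrete illustration of the scoring being appealed to here, a minimal sketch of a logarithmic scoring rule in Python; the three-outcome setup and the particular probability assignments are just examples.)

```python
import math

def log_score(probs, true_outcome):
    """Logarithmic score for a probability assignment over outcomes.
    Higher is better; putting probability 0 on what actually happens
    scores negative infinity."""
    p = probs[true_outcome]
    return math.log(p) if p > 0 else float("-inf")

outcomes = ["A", "B", "C"]

refuses_to_believe = {o: 0.0 for o in outcomes}   # assigns no probability to anything
max_entropy = {o: 1 / 3 for o in outcomes}        # evenly spread, confessed ignorance
informed = {"A": 0.7, "B": 0.2, "C": 0.1}         # actually sticks its neck out

for name, dist in [("refuses", refuses_to_believe),
                   ("max entropy", max_entropy),
                   ("informed", informed)]:
    print(f"{name:12s} score if A happens: {log_score(dist, 'A'):.3f}")
```

The informed assignment scores best when it is right and worse than maximum entropy when it is badly wrong, which is the sense in which “never believing anything” is the lowest score available rather than a safe default.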
It’s not a caricature of the actual behavior of many of its members… which notably does not live up to that ideal.
No, I mean choosing to never consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true… especially with respect to the things we would prefer to believe are true about ourselves and others.
A segment of LW culture applauds the detection and management of superficial biases while being ludicrously blind to the massive bias of the very framework it operates in: the one where truth and reason must prevail at all costs, and where the idea of believing something false—even for a moment, even in a higher cause, is unthinkable.
Is that a caricature of the Bayesian ideal? No kidding. But I’m not the one who’s drawing it.
What I’m specifically referring to here is the brigade whose favorite argument is that something or other isn’t yet proven “true”, and that they should therefore not try it… especially if they spend more time writing about why they shouldn’t try something, than it would take them to try it.
Heck, not just why they shouldn’t try something, but why noone should ever try anything that isn’t proven. Why, thinking a new thought might be dangerous!
And yes, someone actually argued that, in the context of a thread talking about purely-mental experiments that basically amounted to thinking. (Sure, they left themselves weasel room to argue that they weren’t saying thoughts were dangerous, and yet they still used it as a fully general argument, applied to the specific case of experimenting with a thought process.)
What’s that saying about how, if given a choice between changing their mind and trying to prove they don’t need to, most people get busy on the proof?
So, “never believing anything” means having unwavering certainty?
Without knowing what “brigade” or techniques you are referring to, I have to wonder if the people involved were not looking for some absolute proof, but for evidence that a given technique is more likely to be helpful than harmful. Or maybe some of them had tried a bunch of techniques presented with similar claims, with no success, and decided they want to have a good reason for trying out a particular technique instead of the many similar techniques one might propose. They might even think that, if they knew the reasons that someone was proposing them, they might understand the technique better and use it more effectively, or even see that the reasoning was not quite right, but if they fix it, it suggests that a similar technique might work. They might not actually have the particular problem the technique is supposed to solve, and are seeking evidence about if it works for people who do have the problem.
Good point, but a priori I wouldn’t expect a self-help technique to be harmful in a way that’s either hard to notice or hard to reverse. Can you think of some that are, especially ones where it would be hard to predict the harm beforehand from evidence specific to the technique?
Not wanting to invest time and effort can be a good reason (then again, if you have time to argue at length in comment threads...), but the existence of similar techniques shouldn’t matter. A greater number of options has been shown to lead to less willingness to choose anything (e.g.); beware. (FWIW, I suspect this has to do with a general heuristic to do the most defensible thing instead of the best thing.)
Strongly agreed. Generally, though, I agree with pjeby’s conclusion (tentatively, but only because so many others here disagree).
So, you want an example of a technique that I can argue is harmful, but where it is difficult to predict that harm? You want a known unknown unknown? I don’t think I can provide that. But if you look at my assessment of the keeping-cookies-available trick, I explain how there is some possibility of harm and what kinds of evidence one might use to evaluate if the risk is worth the potential benefit.
Suppose you have 10 tricks that you might try to solve a particular problem, and that it might take a day to try one trick and evaluate if it worked for you. Would it be a good idea to spend some time to figure out if one of the tricks stands out from the others as more likely to work, either generally or for you in particular? Being able to systematically try the trick that is well supported and understood has some value. Or, if in discussing one aspect of the trick that you think would never work, it turns out you were right that what you understood would not work, but the actual trick is something different, then you have not just saved a lot of time, you have prevented yourself from losing the opportunity to try the real trick.
No, an example of a technique that is harmful, but whose harm would have been difficult for a reasonable person to predict in advance. The potential downside of the cookie trick is easy to notice and easy to reverse (well, I guess you can’t easily reverse gaining epsilon weight, but you can limit it to epsilon), so as a reason not to try it, it’s very weak.
I take my point back. If you can only try one thing, it makes sense to just act if there is only one option, but to demand a good reason before wasting your chance if there are multiple options. (Formally, this is because the opportunity cost of failure is greater in the latter case.) Realistically, “willpower to engage in psychological modification” seems like it would often be a limiting factor producing this effect; still, I would expect irrational choice avoidance to be a factor in many cases of people demanding a reason to favor one option.
My point is that this argument is fully general. What is the evidence that empirical rationality is more likely to be helpful than harmful? If we look at the evidence presented here, where evidence that being too rational can hurt your happiness and effectiveness is presented more often than the reverse, why is this not treated as a reason to wait for more study of this whole “rationality” business, to make sure it’s more helpful than harmful?
It’s really ironic that optimism is as much a mind-killer here as politics and religion. Hell, the fact that religion can be shown to have empirically positive effects on people’s lives is often viewed here as a depressing problem, rather than an opportunity to learn something about how brains work. The problem of understanding the god-shaped hole is something people talk a lot about, but very few people are actually doing anything about it.
Well, we have reliable agriculture, reasonably effective transportation infrastructure, flight, telecommunication, powerful computers with lots of useful software because lots of people worked hard to believe things that are true, and used those true beliefs to figure out how to accomplish their goals.
Asking for evidence is not a fully general counterargument. It distinguishes arguments that have evidence in their favor from those which don’t.
And keep in mind, the social process of science, which you seem to think holds you to too high of a standard, is not merely interested in your ideas being right, but that you have effectively communicated them to the extent that other people can verify that they are right. You might discover the greatest anti-akrasia trick ever, but if you can’t explain it so that other people can use it and have it work even if you are not there to guide the process, then you have only helped yourself and your clients, and talking about it here is not helping. Of course, you could take the opportunity to figure out how to explain it better, though it would require you to “consider the possibility that you might be utterly, horribly wrong in your certainty about what things are true”.
Two things I forgot in my other reply: first, testing on yourself is a higher standard than peer review, if your purpose is to find something that works for you.
Second, if this actually were about “my” ideas (and it isn’t), I’ve certainly effectively communicated many of them to the extent of verifiability, since many people have reported here and elsewhere about their experiments with them.
But very few of “my” ideas are new in any event—I have a few new approaches to presentation or learning, sure, maybe some new connections between fields (ev psych + priming + somatic markers + memory-prediction framework + memory reconsolidation, etc.), and a relatively-new emphasis on real-time, personal empirical testing. (I say relatively new because Bandler was advocating extreme testing of this sort 20+ years ago, but for some reason it never caught on in the field at large.)
And I’m not aware that any of these ideas is particularly controversial in the scientific community. Nobody’s pushing for more individual empirical testing per se, but the “brief therapy” movement that resulted in things like CBT is certainly more focused in that direction than before.
(The reason I stopped even bothering to write about any of that, though, is simply that I ended up in some sort of weird loop where people insist on references, and then ignore the ones I supply, even when they’re online papers or Wikipedia. Is it any wonder that I would then conclude they didn’t really want the references?)
Those are the products of rationalism. I’m asking about evidence that the practice of (extreme) rationalism produces positive effects in the lives of the people who practice it, not the benefits that other people get from having a minority of humans practice it.
It is if you also apply the status quo bias to choose which evidence to count.
I really wish people wouldn’t conflate the discussion of learning and attitude in general with the issue of specific techniques. There is plenty of evidence for how attitudes (of both student and teacher) affect learning, yet somehow the subject remains quite controversial here.
(Edited to say “extreme rationalism”, as suggested by Nick Tarleton.)
You should probably be asking about extreme rationality.
Evidence is demanded because that is what it takes to communicate enough to actually change the preferred decision.
If I like eating cookies, and so choose to eat cookies, it takes at least a deliberative thought to change my mind. I may have all the data, but changing a decision requires considering it. I may realize that I’m getting overweight, and that most of my calories come from cookies, so I change my mind and start preferring the decision of not eating cookies.
If a guy on the forum says, to my disbelief, that stopping eating the cookies in my particular situation will actually make me even more overweight, I won’t be able to change my mind as a result of hearing his assertion. I consider what it’d take to change my mind, and present him with a constructive request: find a few good studies supporting your claims, and show them to me. That’s what it takes to change my mind, and I can think of no other obvious way for him to convince me to change this decision.
You mean status quo bias, like the argument against the Many-Worlds interpretation?
It’s funny that you mention this, because I actually know of an author who says something just similar enough to that idea that you could mistake what she says as meaning you should eat the cookies.
Specifically, she posits a mechanism which causes some people to eat compulsively when they believe they will not have enough food in the future, regardless of whether they’re hungry now. She actually encourages these people to keep stores of indulgence foods available in all places at all times, in order to produce a feeling of security that negates their compulsion to eat now—in effect, they can literally procrastinate on overeating, because they could now do it “any time”. There’s no particular moment at which they need to eat up because they’re about to be out of reach of food.
I bring this up because, if you heard this theory and then misinterpreted it as meaning you should eat the cookies, it would be quite logical for you to be skeptical, since it doesn’t match your experience.
However, if you simply observed your past experience of overeating and found a correlation between the times when you ate cookies and a pending separation from food (e.g. when you were about to go into a long meeting), I would be very disappointed in your rationality if you then chose NOT to try bringing the cookies into the meeting with you, or hiding a stash in the bathroom that you could excuse yourself for a moment to get, or even just focusing on having some right there when you get out of the meeting.
And yes, this metaphor is saying that if you think you need studies to validate things that you can observe first in your own past experience, and then test in your present, then you’ve definitely misunderstood something I’ve said.
(Btw, in case anyone asks, the author is Dr. Martha Beck and the book I’m referring to above is called The Four-Day Win.)
I strongly suspect that this trick wouldn’t work on me; the problem is that I’ve taught my brain to deliberately keep a step ahead of this sort of self-deception. Even if I started out by eating a whole pack of cookies, the second pack, the one I was supposed to keep available and feel the availability of but not eat, would not feel available. If it were truly, genuinely available and it was okay to eat it, I would probably eat it. If not, I couldn’t convince myself it was available.
What I may try is telling myself a true statement when I’m tempted to eat, namely that I actually do have strong food security, and I may try what I interpret as your monoidealism trick, to fill my imagination with thoughts of eating later, to convince myself of this. That might help—if the basic underlying theory of eating to avoid famine is correct. Some of the Seth Roberts paradigm suggests that other parts of our metabolism have programmed us to eat more when food is easily available. We could expect evolution to be less irrational than the taxi driver who quits early on rainy days when there are lots of fares, and works harder and longer when work is harder to come by, in order to make the same minimum every day.
Another thought is that it may be a bad situation for your diet to ever allow yourself to be in food competition with someone else—to ever have two people, at least one of whom is trying to diet, eating from the same bag of snacks in a case where the bag is not immediately refilled on being consumed.
’Tis a pity that such theories will never be tested unless the diet-book industry and its victims/prey/readers become something other than what they are now; even if I were to post saying this trick worked, it would only be one more anecdote among millions on the Internet.
IIRC, she only advocated this theory for people who were binging in response to anticipated hunger, and not as a general theory of weight loss. It’s only a tiny part of the book as a whole, which also discussed other emotional drivers for eating. Part of her process includes making a log of what you eat, at what time of day, along with what thoughts you were thinking and what emotional and physical responses you were having… along with a reason why the relevant thought might not be true.
I haven’t tried it myself—I actually didn’t buy the book for weight loss, but because I was intrigued by her hypothesis that it only takes four days to implement a habit (not 21 or 30 as traditional self-help authors claim), provided that the habit doesn’t represent any sort of threat to your existing order. For example, most people can easily learn a new route to work or school within four days of moving or changing jobs or schools.
That is, it’s only habits that conflict in some way with an existing way of doing things that are difficult to form, so her proposal is to use extremely small increments, like her own example of driving to the gym every morning for four days… but just sitting in the parking lot and not actually going in… then going in and sitting on a bike but not exercising… etc. At each stage, four days of it is supposed to be enough to make what you’ve already been doing a non-threatening part of your routine.
I’ve used the approach to implement some small habits, but nothing major as yet. Seems promising so far.
It seems that keeping cookies constantly available, so that one never feels they will be unavailable, does not involve any sort of self-deception. One can honestly tell oneself that the cookie doesn’t have to be eaten now; it will still be there later.
But still, this trick might be harmful to some people. If someone instead will just eat cookies whenever they are available, without any regard to future availability, this will cause that person to eat a lot of cookies.
It might help to have some studies that say some percentage of people are the sort this technique helps and some other percentage are the sort that are harmed, or better yet, that identify some observable characteristic that predicts how a person would be affected. With this information, people could figure out if it makes sense for them to risk spending some time making their problem worse, by having more cookies available for a while, in order to maybe learn a technique that solves their problem. They might also be able to figure out if they should first try some other trick they heard about.
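Purely as an illustration of the decision described above (every number below is invented, not taken from any study), the calculation would look something like this:

```python
# Hypothetical numbers only: deciding whether to risk trying the
# keep-cookies-available trick, given study-style base rates.

p_helped = 0.5           # assumed fraction of people the trick helps
p_harmed = 0.2           # assumed fraction it makes worse for a while
value_if_helped = 10.0   # subjective value of actually solving the problem
cost_if_harmed = 3.0     # subjective cost of a stretch of extra cookie-eating

expected_value = p_helped * value_if_helped - p_harmed * cost_if_harmed
print(expected_value)    # positive with these numbers, so trying looks worthwhile

# If a study also identified an observable characteristic (say, bingeing
# mainly before anticipated separation from food), you could redo the same
# arithmetic using the rates for people who match your own description.
```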
And if you cross the street, you might get hit by a car. This sort of reasoning goes on and on but never gets you anywhere. If you don’t want to do something, you can always find a reason.
Sure, that doesn’t mean that ANY reason is valid, or that EVERY objection is invalid. However, that too is another step in a chain of reasoning that never ends, and never results in taking action. Instead, you will simply wait and wait and wait for someone else to validate your taking action. So if you don’t take status quo bias into consideration, then you have no protection against your own confabulation, because the only way to break the confabulation deadlock is to actually DO something.
Even if you don’t know what the hell you’re doing and try things randomly, you’ll improve as long as there’s some empirical way to measure your results, and the costs of your inevitable mistakes and failures are acceptable. Hell, look at how far evolution got, doing basically that. An intelligent human being can certainly do better… but ONLY by doing something besides thinking.
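That claim about blind variation plus measurement is easy to see in a toy example; the sketch below (my own stand-in, with an arbitrary “measurement” function) just makes random tweaks and keeps whatever measures better:

```python
import random

# Toy illustration: no insight into the problem at all, just random tweaks
# plus an empirical measure of "better", keeping only improvements.

def measured_result(x):
    # Stand-in for actually testing something in the real world.
    return -(x - 3.7) ** 2

current = 0.0
for _ in range(1000):
    candidate = current + random.gauss(0, 0.5)   # a blind random change
    if measured_result(candidate) > measured_result(current):
        current = candidate                      # keep it only if it tests better

print(current)  # ends up close to the optimum (3.7) despite no reasoning
```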
After all, the platform human thinking runs on is not reliable. In principle, LWers and I agree on this. But in practice, LWers argue as if their brains were reliable reasoning machines, instead of arguing machines!
I learned the hard way that my brain’s confabulation—“reasoning”—is not reliable when it is not subjected to empirical testing. It works well for reducing evidence to pattern and playing with explanatory models, but it’s lousy at coming up with ideas in the first place, and even worse at noticing whether those ideas are any good for anything but sounding convincing.
One of my pet sayings is that “amateurs guess, professionals test”. But “test” in the instrumental world does not mean a review of statistics from double-blind experiments. It means actually testing the thing you are trying to build or fix, in as close to the actual use environment as practical. If my mechanic said statistics show it’s 27% likely that the problem with my car is in the spark plugs, but didn’t actually test them, I’d best get another mechanic!
The best that statistics can do for the mechanic is to mildly optimize what tests should be done first… but you could get almost as much optimization by testing in easiest-first order.
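A back-of-the-envelope version of that comparison, reusing the 27% spark-plug figure from above but otherwise inventing the parts, probabilities, and test times, might look like this:

```python
# Rough sketch with mostly invented numbers: expected hours to locate a car
# fault under two test orderings. Tuples are (name, probability, hours to test).

faults = [
    ("spark plugs", 0.27, 0.5),
    ("battery",     0.45, 0.75),
    ("fuel pump",   0.28, 2.5),
]

def expected_hours(order):
    hours, p_still_searching = 0.0, 1.0
    for _, p, cost in order:
        hours += p_still_searching * cost   # you pay for this test if you get to it
        p_still_searching -= p              # chance this test found the fault
    return hours

by_probability = sorted(faults, key=lambda f: f[1], reverse=True)
easiest_first  = sorted(faults, key=lambda f: f[2])

print(expected_hours(by_probability), expected_hours(easiest_first))
```

With these invented numbers the two orderings come out within a few minutes of each other, which is the “mild optimization” being described.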
And where, pray tell, is this information going to come from if nobody tries anything? If the Extreme Rationalist position is only to try things that have been validated on other people first, what happens when everybody is an extreme rationalist? Doesn’t sound very scalable to me.
The fact that one might get hit by a car when crossing the street does not prevent people from crossing the street; it causes them to actually look to see if cars are coming before crossing the street. Did you miss the part where I talked about how the evidence might convince someone the risk is acceptable in their case? Or how it might help them to compare it to another trick that is less risky or more likely to work for them that they could try first?
Getting volunteer subjects for a study is different than announcing a trick on the internet and expecting people to try it.
Where do you think the data is going to come from if people just try it on their own? How are you going to realize if you have suggested a trick that doesn’t work, or that only works for some people, if you accept all anecdotal success stories as confirming its effectiveness, but reject all reports of failure because people just make excuses?
I can only assume you’re implying that that’s what I do. But as I’ve already stated, when someone has performed a technique to my satisfaction, and it still doesn’t work, I have them try something else. I don’t just say, “oh well, tough luck, and it’s your fault”.
There are only a few possibilities regarding an explanation of why “different things work for different people”:
1. Some things only work on some people, and this is an unchanging trait attributable to the people themselves;
2. Some things only work on certain kinds of problems, and many problems superficially sound similar but are actually different in their mechanism of operation (so that technique A works on problem A1 but not A2, and the testers/experimenters have not yet discerned the difference between A1 and A2); and
3. Some people have an easier time of learning how to do some things than others, depending in part on how the thing is explained, and what prior beliefs, understandings, etc. they have. (So that even though a test of technique A is being performed, in practice one is testing an unknown set of variant techniques A1, A2, ...)
On LW, #1 is a popular explanation, but I have seen much more evidence that makes sense for #2 and #3. (For example, not being able to apply a technique and then later learning it supports #3, and discovering a criterion that predicts which of two techniques will be more likely to work for a given problem supports #2.)
Of course, I cannot 100% rule out the possibility that #1 could be true, but it seems like pretty long odds to me. There are so many clear-cut cases of #2 and #3 that, barring actual brain damage or defect, #1 seems like adding unnecessary entities to one’s model, without any theoretical or empirical justification whatsoever.
More than that, it sounds exactly like attribution error, and an instance of Dweck’s “fixed” mindset as well. In other words, we can expect belief in #1 to be associated with a mindset that is highly correlated with consistent difficulty and stress in the corresponding field.
That’s why I consider view #1 to be bad instrumental hygiene as well as not that likely to be true anyway. It’s a horrible negative self-prime to saddle yourself with.
Actually, it might make more sense to try to figure out why it works sometimes and not others, which can even happen in the same person… like, uh, me. If I’m careful about what I keep in the house, I wind up gorging on anything tasty almost as soon as I get it, and buying less healthy ‘treats’ more often (‘just this once’, repeatedly) when I go out. If I keep goodies at home, I’ll ignore them for a while, but then decide something along the lines of “it’d be a shame to let this go to waste” and eat them anyway.
There are different mental states involved in each of those situations, but I don’t know what triggers the switch from one to another.
I mean the argument being too weak to change one’s mind about a decision. It communicates the info, changes the level of certainty (a bit), but it doesn’t flip the switch.
I think this is an excellent point; I’m not sure it’s a valid criticism of this community.