It’s an illustration of the principle that to proceed from the known to the unknown, one must travel by way of the very-likely-wrong, including that which from your current perspective may appear “more wrong” than where you started from.
You’ve really hedged your language here. Are we talking about beliefs, or “perspectives”? The two seem very different to me. Does anyone ever acquire a skill without trying new perspectives, unproven variations on existing “known-good” techniques? This is just exploration vs exploitation, which seems quite distinct from belief. I don’t change betting strategies just because I’m in the middle of an experiment.
Why do you think this has anything to do with me?
Because it seems that you’ve had more experience with LW’ers rejecting your useful falsehoods than useful falsehoods in general, and I guessed this as the motive behind your original complaint. I could be mistaken. If I am, I’m curious as to which “terror” you’re referring to. It seems fairly widely accepted here that a certain amount of self-deception is useful in the pickup domain, for example.
However, for all that the teachers of practical arts are often deluded in that latter way, they at least have the comfort of systematized winning.
Really? All self-described teachers of practical arts have the comfort of systematized winning? There are no snake-oil charlatans for whom things just “went well” and are now out to capitalize on it? How can we tell the difference?
To the extent that you find the teachings of these arts to be less than “true”, and yet are unable to replicate the results of their masters, it is as irrational to insist that only the true can ever be useful, as it would be to assert that the useful must therefore be true. [...] The pursuit of truth for its own sake is an irrational passion, unless knowing truth is your ONLY form of winning.
I think the above exemplifies the mismatch between your philosophy and mine. Yes, it’s incorrect to claim that only true beliefs are useful. But the tools of true belief (reason and empiricism) are the only ones we have when trying to figure out what wins and what doesn’t. To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness. My position is that T trumps U because U is inaccessible without T. I don’t see any other way to reliably arrive at U instead of ~U or V. I am reminded of the Library of Babel.
Really? All self-described teachers of practical arts have the comfort of systematized winning?
I said “for all that”, which is not “for all of”—a very different meaning. “For all that” means something like “despite the fact that”, or “although”. I.e., “although the teachers of practical arts are often deluded, they at least have the comfort of systematized winning.” What’s more, it’s you who said “self-described”—I referred only to people who have some systematized winning.
There are no snake-oil charlatans for whom things just “went well” and are now out to capitalize on it? How can we tell the difference?
See, that’s the sort of connotation I find interesting. How is “snake oil charlatan” connected to having things go well and wanting to capitalize on it? Would you want to be taught by someone who didn’t have things go well for them? And if they didn’t want to capitalize on it in some fashion, why would they be teaching it? (Even if the only capitalization taking place is that they enjoy teaching!)
If you break down what you’ve just said, it should be easy to see why I think this sort of “thinking” is just irrationally-motivated reaction—the firing off of “boo” lights in response to certain buttons being pushed.
To adopt a given useful-but-otherwise-arbitrary belief U, I first need a true belief T that U is useful. Your position seems to be that U trumps T because of its intrinsic usefulness.
No—I’m saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren’t really testing U, you’re testing U+X, where X is whatever else it is you believe about U, like, “I’m going to see if this works”, or “I think this is stupid”.
Good epistemic hygiene in testing the usefulness of a belief requires that you not contaminate your test chamber with other beliefs.
Now, that may sound like a defense of psychic phenomena. But it isn’t. You don’t need an absence of skepticism from the overall proceedings, only a temporary absence of skepticism in the performer.
And the measurement of the performer’s results can be as objective and skeptical as you like. (Although, for processes whose intent is also subjective—i.e., to make you feel better about life or be more motivated—then only the subjective experiencer can measure that bit, of course.)
Sometimes, I get clients who will say something like, “Well, I felt better, but how do I know I wasn’t just imagining it?”, and I have to separate out the confusion. Because what they’re really saying is, “At time X I felt good, but now at time Y I’m thinking that maybe it wasn’t real”.
However, if the experiment was “perform procedure Z at time X-1” with a prediction that this would result in a positive experience at time X, and you did in fact feel good at time X, then procedure Z worked. Retroactively questioning it only makes you feel bad now—it can’t change how you felt at time X, although it can reconsolidate your memory so that it seems like you felt worse at time X.
In other words, it’s questioning yourself afterwards that’s poor epistemic hygiene, because it actually alters your memories. (See all those studies about how asking people leading questions alters their memories.)
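To make the pattern concrete, here’s a toy sketch of that discipline in Python (the class and field names are mine, purely illustrative—nothing from the comment above): the prediction is fixed before procedure Z is run, the outcome is scored exactly once at time X, and later attempts to re-score it are treated as out of bounds.

```python
# Toy illustration only; names and structure are assumptions, not a real tool.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SelfExperiment:
    procedure: str                      # e.g. "procedure Z: write a plan for the day"
    prediction: str                     # stated up front, at time X-1
    outcome: Optional[str] = None       # recorded once, at time X
    recorded_at: Optional[datetime] = None

    def record_outcome(self, outcome: str) -> None:
        """Score the experiment exactly once, at time X."""
        if self.outcome is not None:
            # Re-scoring at time Y is the retroactive questioning described above:
            # it can't change what happened at time X, only your memory of it.
            raise RuntimeError("Outcome already recorded at time X; don't re-litigate it.")
        self.outcome = outcome
        self.recorded_at = datetime.now()

exp = SelfExperiment(
    procedure="write a plan for the day",
    prediction="doing this will leave me feeling noticeably better",
)
exp.record_outcome("felt good right afterwards")  # time X: prediction came true
```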
This “success at time X, questioning at time Y” pattern is really common among naturally-struggling people. It’s basically the way people prevent themselves from advancing. And it doesn’t matter what procedure Z is—it could be something like making a plan for their day. I’ll ask, “well, did you make a plan?” And they’ll be like, “well, yeah, but what if I didn’t do it right?”
It’s this process of self-questioning that directly results in the problems. If you want to develop skill at something, you can’t tinker with your success criteria after the fact, to make it so that you failed anyway.
Skepticism is useful before you do something, to set up the criteria for measuring something. But it’s not useful while you’re doing the thing, nor after you’ve actually done it.
The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, they will suddenly become delusional theists or something. Which is really ironic, because in the type of situation I’m describing it’s the questioning that creates the delusion, redefining the past to suit the whims of the present.
I referred only to people who have some systematized winning.
I did assume you held the position that these people are somehow identifiable. If your point was merely “there exist some people out there who are systematic winners”… then I’m not sure I get your point.
How is “snake oil charlatan” connected to having things go well and wanting to capitalize on it?
Because “I figured out the key to success, I succeeded, and now I want to share my secrets with you” is the story that sells, regardless of actual prior circumstance or method.
Would you want to be taught by someone who didn’t have things go well for them? And if they didn’t want to capitalize on it in some fashion, why would they be teaching it?
I don’t think you understand why I bring up charlatans. This is a signaling problem. You’re right… I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there’s going to be a lot of faking going on.
If you break down what you’ve just said, it should be easy to see why I think this sort of “thinking” is just irrationally-motivated reaction—the firing off of “boo” lights in response to certain buttons being pushed.
My, you are confident in your theories of human motivation. You said (minus subsequent disclaimers, because this is what I was responding to), “teachers of the practical arts [...] have the comfort of systematized winning”. It seems to me that this “comfort” is claimed far out of proportion to its actual incidence, which bears very directly on the whole issue of distinguishing “useful” signal from noise. If you do have legitimate insights, you’re certainly not making yourself any more accessible by pointing to others in the field. If your point was merely “some deluded people win”… then I’m not sure I get your point.
No—I’m saying that the simplest way to assess belief U is to try acting as if it were true. In fact, the ONLY way to assess the usefulness of U is to have one or more persons act as if it were true. Because without that, you aren’t really testing U, you’re testing U+X, where X is whatever else it is you believe about U, like, “I’m going to see if this works”, or “I think this is stupid”.
This response isn’t really addressing my point of contention, with the result that I mostly agree with the rest of your comment (sans last paragraph). So I’ll try to explain what I mean by “T”. You say “skepticism is useful before you do something”, and it’s precisely this sort of skepticism that T represents. You leapt straight into explaining how I’ve just got to embrace U in order to make it work, but that doesn’t address why I’m even considering U in the first place. Hence “I first need a true belief T that U is useful”. Pardon me for a moment while I look into how useful it is to believe I’m a goat.
The irrational fear I keep talking about here is people being attached to the idea that if they refrain from self-questioning of this type, they will suddenly become delusional theists or something.
Again, I think you’re overstating this fear, but now that you mention theism, I can’t help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you’ve already decided to experimentally swallow… work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?
You’re right… I would demand some kind of evidence of success from a teacher. But if these prerequisites are at all easier to come by than the real thing, there’s going to be a lot of faking going on.
Well, in the case of at least marketing and pickup, you can generally observe the teacher’s own results, as long as you’re being taught directly. For acting, you could observe the ability of the teacher’s students. Copywriting teachers (people who teach the writing of direct marketing ads) can generally give sales statistics comparisons of their improvements over established “controls”. (Btw, in the direct marketing industry, the “control” is just whatever ad you’re currently using; it’s not a control condition where you don’t advertise or run a placebo ad!)
IOW, the practical arts of persuasion and belief do involve at least some empirical basis. One might quibble about what great or excellent acting or pickup might be, but anybody can tell bad acting or failed pickup. And marketing is measurable in dollars spent and actions taken. Marketers don’t always understand math or how to use it, but they’re motivated to use statistical tools for split-testing.
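For anyone who hasn’t seen one, here’s a minimal sketch of the kind of split test being described, written in Python with made-up numbers: the “control” is just the ad currently running, and a challenger ad is compared against it on conversion rate with a pooled two-proportion z-test.

```python
# Minimal sketch with hypothetical numbers; real split-testing tools differ.
from statistics import NormalDist

def split_test(control_conv, control_visitors, challenger_conv, challenger_visitors):
    """Compare a challenger ad against the current 'control' ad.

    Returns (lift in conversion rate, two-sided p-value) from a
    pooled two-proportion z-test.
    """
    p_control = control_conv / control_visitors
    p_challenger = challenger_conv / challenger_visitors
    pooled = (control_conv + challenger_conv) / (control_visitors + challenger_visitors)
    se = (pooled * (1 - pooled) * (1 / control_visitors + 1 / challenger_visitors)) ** 0.5
    z = (p_challenger - p_control) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_challenger - p_control, p_value

# Hypothetical traffic: 10,000 visitors per ad.
lift, p = split_test(120, 10_000, 150, 10_000)
print(f"lift: {lift:+.4f}, p-value: {p:.3f}")  # positive lift; check p before switching
```

The particular statistic isn’t the point—only that “did the new ad beat the control?” is a question with a measurable answer.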
If your point was merely “some deluded people win”… then I’m not sure I get your point.
The ancient Greeks thought fire was an element, but that didn’t stop them from using fire. Developing a practical model and a “true” theory are quite often independent things. My point is that you don’t need a true theory to build useful models, or to learn and use them. And in most practical arts related to belief or persuasion, you will need to “act as if” certain beliefs are true, whether or not they are, because those beliefs nonetheless represent a model for reproducing behaviors that produce results under some set of circumstances.
For example, Seth Roberts’ theory of calorie-flavor association is probably not entirely true—but acting as if it were true produces results for some people under some circumstances. This represents progress, not failure.
“I first need a true belief T that U is useful”.
Right—and my process for that, with respect to self-help techniques, is mainly to look at the claims for a technique, and sort for ones that can be empirically verified and claim comparable or improved benefits relative to the ones that I’ve already tried. Assuming that the cost in time to learn the technique is reasonable (say, a few hours), and it can be implemented and tested quickly, that’s sufficient T probability for me to engage in a test.
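As a rough sketch of that screening step (the thresholds and field names here are my own assumptions, not a precise statement of the process), it amounts to a filter something like this:

```python
# Rough, assumption-laden sketch of the screening heuristic described above.
def worth_testing(claims_are_empirically_checkable: bool,
                  claimed_benefit_vs_best_known: float,  # >= 1.0: comparable or better
                  hours_to_learn: float,
                  hours_to_run_one_test: float) -> bool:
    """Return True if a technique is cheap enough to be worth an actual trial."""
    return (claims_are_empirically_checkable
            and claimed_benefit_vs_best_known >= 1.0
            and hours_to_learn <= 3.0          # "a few hours" to learn
            and hours_to_run_one_test <= 1.0)  # implementable and testable quickly

print(worth_testing(True, 1.2, hours_to_learn=2.0, hours_to_run_one_test=0.5))  # True
```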
I can’t help but notice that all of the arguments you just gave (that I pretty much agree with) for unquestioningly accepting a belief you’ve already decided to experimentally swallow… work equally well for theism. So what is it exactly, if not some flavor of T, that allows me to distinguish between the two?
Religions don’t claim repeatable empirical benefits—in fact, they pretty carefully disclaim any. Zen is one of the few religions that contain procedures with claimed empirical benefits (e.g., meditation producing improved concentration and peace of mind), and those claims have actually held up pretty well under scientific investigation as well as my personal experimentation.
So, for me at least, your “T” consists mostly of claimed empirical benefits via a repeatable procedure capable of very short evaluation times—preferably suitable for immediate evaluation of whether something worked or it didn’t.
I do have two things going for me that most people evaluating such techniques don’t: an experiential grasp of monoidealism and of somatic markers. At first, I tried a lot of these same techniques before I understood either, and couldn’t get them to work. But once I had even the rudiments of those ideas—not as theory but as experience—I got many of the same things to work quite well.
That suggests very strongly to me that the major hidden variable in interpersonal variation of self-help technique applicability has less to do with the techniques themselves or any inherent property of the learner than with whether or not they’ve learned to distinguish conscious from unconscious thoughts, and their abstract conception of an emotion or event from its physical representation as a body sensation or as an internal image or sound. Most people (IME) seem to naturally confuse their internal narration about their experiences with the experiences themselves. (Sort of like in “Drawing On The Right Side Of The Brain”, where people confuse their symbols or abstractions for faces and hair with what they’re actually seeing.)
Separating these things out is the primary skill I teach (as a vehicle to make other self-help techniques accessible), and many people require some sort of live feedback in order to learn it. There is some mild anecdotal evidence that prior experience with meditation helps—i.e., the students who pick it up faster seem somewhat more likely to report prior meditation experience. But I haven’t even tried to be rigorous about investigating that, since even non-meditators can learn the skill.
(Hm, now that I’ve written this, though, I wonder whether some of the Drawing On The Right Side Of The Brain exercises might be helpful in teaching these skills. I’ll have to look into that.)
My, you are confident in your theories of human motivation.
If you look closely at what I said, I was explaining why I thought what I thought about your response, not asserting that the thought was correct; I wanted to explain why I had the impression I did, not justify it or argue that it was actually true. That’s a subtlety that’s hard to convey in text, I suppose.