Or from the general OB/LW picture, where inference is a thing that happens in material systems, and that yields true conclusions, when it does, for non-mysterious reasons that we can investigate and can troubleshoot?
One problem with interfacing formal/mathematical rationality with any “art that works”, whether it’s self-help or dating, is that when people are involved, there are feed-forward and feed-back effects, similar to Newcomb’s problem, in a sense. What you predict will happen makes a difference to the outcome.
One of the recent paradigm shifts that’s been happening in the last few years in the “seduction community” is the realization that using routines and patterns leads to state-dependence: that is, to a guy’s self-esteem depending on the reactions of the women he’s talked to on a given night. This has led to the rise of the “natural” movement: copying the beliefs and mindsets of guys who are naturally good with women, rather than the external behaviors of guys who are good with women.
Now, I’m not actually involved in the community; I’m quite happily married. However, I pay attention to developments in that field because it has huge overlap with the self-help field, and I’ve gotten many insights about how status perception can influence your behavior—even when there’s nobody else in the room but yourself.
I wandered off point a little there, so let me try and bring it back. The OB/LW approach to rationality—at least as I’ve seen it—is extremely “outside view”-oriented when it comes to people. There’s lots of writing about how people do this or that, rather than looking at what happens with one individual person, on the inside.
Whereas the “arts that work” are extremely focused on an inside view, and actually learning them requires a dedication to action over theory, and taking that action whether you “believe” in the theory or not. In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The “truth” of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.
When I read self-help books in the past, I used to ignore things if I didn’t agree with their theories or saw holes in them. Now, I simply TRY what they say to do, and stick with it until I get a result. Only then do I evaluate. Anything else is idiotic, if your goal is to learn… and win.
Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math—not real-world performance or self-experimentation.
In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The “truth” of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.… Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math—not real-world performance or self-experimentation.
Experimenting, implementing, tracking results, etc. is totally compatible with the OB/LW picture. We haven’t built cultural supports for this all that much, as a community, but we really should, and, since it resonates pretty well with a rationalist culture and there’re obvious reasons to expect it to work, we probably will.
Claiming that a particular general model of the mind is true, just because you expect that claim to yield good results (and not because you have the kind of evidence that would warrant claiming it as “true in general”), is maybe not so compatible. As a culture, we LW-ers are pretty darn careful about what general claims we let into our minds with the label “true” attached. But is it really so important that your models be labeled “true”? Maybe you could share your models as thinking gimmicks: “I tend to think of the mind in such-and-such a way, and it gives me useful results, and this same model seems to give my clients useful results”, and share the evidence about how a given visualization or self-model produces internal or external observables? I expect LW will be more receptive to your ideas if you: (a) stick really carefully to what you’ve actually seen, and share data (introspective data counts); (b) label your “believe this and it’ll work” models as candidate “believe this and it’ll work” models, without claiming the model as the real, fully demonstrated as true, nuts and bolts of the mind/brain.
In other words: (1) hug the data, and share the data with us (we love data); and (2) be alert to a particular sort of cultural collision, where we’ll tend to take any claims made without explicit “this is meant as a pragmatically useful working self-model” tags as meant to be actually true rather than as meant to be pragmatically useful visualizations/self-models. If you actually tag your models with their intended use (“I’m not saying these are the ultimate atoms the mind is made of, but I have reasonably compelling evidence that thinking in these terms can be helpful”), there’ll be less miscommunication, I think.
we’ll tend to take any claims made without explicit “this is meant as a pragmatically useful working self-model” tags as meant to be actually true rather than as meant to be pragmatically useful visualizations/self-models.
Yeah, I’ve noticed that, which is why my comment history contains so many posts pointing out that I’m an instrumental rationalist, rather than an epistemic one. ;-)
I’m not sure it’s about being an epistemic vs. an instrumental rationalist, so much as about tagging your words so we follow what you mean.
Both people interested in deep truths, and people interested in immediate practical mileage, can make use of both “true models” and “models that are pragmatically useful but that probably aren’t fully true”.
You know how a map of North America gives you good guidance for inferences about where cities are, and yet you shouldn’t interpret its color scheme as implying that the land mass of Canada is uniformly purple? Different kinds of models/maps are built to allow different kinds of conclusions to be drawn. Models come with implicit or explicit use-guidelines. And the use-guidelines of “scientific generalizations that have been established for all humans” are different from the use-guidelines of “pragmatically useful self-models, whose theoretical components haven’t been carefully and separately tested”. Mistake the latter for the former, and you’ll end up concluding that Canada is purple.
When you try to share techniques with LW, and LW balks… part of the problem is that most of us LW-ers aren’t as practiced in contact-with-the-world trouble-shooting, and so “is meant as a working model” isn’t at the top of our list of plausible interpretations. We misunderstand, and falsely think you’re calling Canada purple. But another part of the problem is that it isn’t clear that you’re successfully distinguishing between the two sorts of models, and that you have separated out the parts of your model that you really do know and really can form useful inferences from (the distances between cities) from the parts of your model that are there to hold the rest in place, or to provide useful metaphorical traction, but that probably aren’t literally true. (Okay, I’m simplifying with the “two kinds of models” thing. There’s really a huge space of kinds of models and of use-guidelines matched to different kinds of models, and maybe none of them should just be called “true”, without qualification as to the kinds of use-cases in which the models will and won’t yield true conclusions. But you get the idea.)
In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The “truth” of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.… Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math—not real-world performance or self-experimentation.
Trying to interpret this charitably, I’ll suggest a restatement: what you call a “theory” is actually an algorithm that describes the actions that are known to achieve the required results. In the normal use of the words, a theory is an epistemic tool that leads you to come to know the truth, and a reason for doing something is an explanation of why that something achieves the goals. Terminologically mixing an opaque heuristic with reason and knowledge is a bad idea; in the quotation above, the word “reason”, for example, connotes rationalization more than anything else.
what you call a “theory” is actually an algorithm that describes the actions that are known to achieve the required results.
No, I’m using the term “theory” in the sense of “explanation” and “as opposed to practice”. The theory of a self-help school is the explanation(s) it provides that motivate people to carry out whatever procedures that school uses, by providing a model that helps them make sense of what their problems are, and what the appropriate methods for fixing them would be.
In the normal use of the words, a theory is an epistemic tool that leads you to come to know the truth, and a reason for doing something is an explanation of why that something achieves the goals.
I don’t see any incompatibility between those concepts; per de Bono (Six Thinking Hats, lateral thinking, etc.), a theory is a “proto-truth” rather than an “absolute truth”: something that we treat as if it were true, until something better is found.
Ideally, a school of self-help should update its theories as evidence changes. Generally, when I adopt a technique, I provisionally adopt whatever theory was given by the person who created the technique, unless I already have evidence that the theory is false, or have a simpler explanation based on my existing knowledge.
Then, as I get more experience with a technique, I usually find evidence that makes me update my theory for why/how that technique works. (For example, I found that I could discard the “parts” metaphor of Core Transformation and still get it to work, thereby falsifying a portion of its original theoretical model.)
Also, I sometimes read about a study that shows a mechanism of mind that could plausibly explain some aspect of a technique. Recently, for example, I read some papers about “affective asynchrony”, and saw that they not only experimentally validated some of what I’ve been doing, but also provided a clearer theoretical model for certain parts of it. (Clearer in the sense of providing a more motivating rationale, and not just because I can point to the papers and say, “see, science!”)
Similar thing for “reconsolidation”—it provides a clear explanation for something that I knew was required for certain techniques to work (experiential access to a relevant concrete memory), but had no “theoretical” justification for. (I just taught this requirement without any explanation except “that’s how these techniques work”.)
There seems to be a background attitude on LW, though, that this sort of gradual approximation is somehow wrong, because I didn’t wait for a “true” theory in a peer-reviewed article before doing anything.
In practice, however, if I waited for the theory to be true instead of useful, I would never have been able to gather enough experience to make good theories in the first place.