In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The “truth” of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.… Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math—not real-world performance or self-experimentation.
Experimenting, implementing, tracking results, etc. is totally compatible with the OB/LW picture. We haven’t built cultural supports for this all that much, as a community, but we really should, and, since it resonates pretty well with a rationalist culture and there’re obvious reasons to expect it to work, we probably will.
Claiming that a particular general model of the mind is true, just because you expect that claim to yield good results (and not because you have the kind of evidence that would warrant claiming it as “true in general”), is maybe not so compatible. As a culture, we LW-ers are pretty darn careful about what general claims we let into our minds with the label “true” attached. But is it really so important that your models be labeled “true”? Maybe you could share your models as thinking gimmicks: “I tend to think of the mind in such-and-such a way, and it gives me useful results, and this same model seems to give my clients useful results”, and share the evidence about how a given visualization or self-model produces internal or external observables? I expect LW will be more receptive to your ideas if you: (a) stick really carefully to what you’ve actually seen, and share data (introspective data counts); (b) label your “believe this and it’ll work” models as candidate “believe this and it’ll work” models, without claiming the model is the real, fully demonstrated, nuts-and-bolts truth of the mind/brain.
In other words: (1) hug the data, and share the data with us (we love data); and (2) be alert to a particular sort of cultural collision, where we’ll tend to take any claims made without explicit “this is meant as a pragmatically useful working self-model” tags as meant to be actually true rather than as meant to be pragmatically useful visualizations/self-models. If you actually tag your models with their intended use (“I’m not saying these are the ultimate atoms the mind is made of, but I have reasonably compelling evidence that thinking in these terms can be helpful”), there’ll be less miscommunication, I think.
we’ll tend to take any claims made without explicit “this is meant as a pragmatically useful working self-model” tags as meant to be actually true rather than as meant to be pragmatically useful visualizations/self-models.
Yeah, I’ve noticed that, which is why my comment history contains so many posts pointing out that I’m an instrumental rationalist, rather than an epistemic one. ;-)
I’m not sure it’s about being an epistemic vs. an instrumental rationalist, so much as about tagging your words so we can follow what you mean.
Both people interested in deep truths, and people interested in immediate practical mileage, can make use of both “true models” and “models that are pragmatically useful but that probably aren’t fully true”.
You know how a map of North America gives you good guidance for inferences about where cities are, and yet you shouldn’t interpret its color scheme as implying that the land mass of Canada is uniformly purple? Different kinds of models/maps are built to allow different kinds of conclusions to be drawn. Models come with implicit or explicit use-guidelines. And the use-guidelines of “scientific generalizations that have been established for all humans” are different from the use-guidelines of “pragmatically useful self-models, whose theoretical components haven’t been carefully and separately tested”. Mistake the latter for the former, and you’ll end up concluding that Canada is purple.
When you try to share techniques with LW, and LW balks… part of the problem is that most of us LW-ers aren’t as practiced in contact-with-the-world trouble-shooting, and so “is meant as a working model” isn’t at the top of our list of plausible interpretations. We misunderstand, and falsely think you’re calling Canada purple. But another part of the problem is that it isn’t clear that you’re successfully distinguishing between the two sorts of models, and that you have separated out the parts of your model that you really do know and really can form useful inferences from (the distances between cities) from the parts of your model that are there to hold the rest in place, or to provide useful metaphorical traction, but that probably aren’t literally true. (Okay, I’m simplifying with the “two kinds of models” thing. There’s really a huge space of kinds of models and of use-guidelines matched to different kinds of models, and maybe none of them should just be called “true”, without qualification as to the kinds of use-cases in which the models will and won’t yield true conclusions. But you get the idea.)