To elaborate a bit where I’m coming from here: I think the original idea with LessWrong was basically to bypass the usual immune system against reasoning, to expect this to lead to some problems, and to look for principles such as “notice your confusion,” “if you have a gut feeling against something, look into it and don’t just override it,” “expect things to usually add up to normality” that can help us survive losing that immune system. (Advantage of losing it: you can reason!)
My guess is that that (having principles in place of a reflexive or socially mimicked immune system) was and is basically still the right idea. I didn't use to think this, but I do now.
An LW post from 2009 that seems relevant (haven’t reread it or its comment thread; may contradict my notions of what the original idea was for all I know): Reason as Memetic Immune Disorder
I don’t have a complete or principled model of what an epistemic immune system is or ought to be, in the area of woo, but I have some fragments.
One way of looking at it is that we look at a cluster of ideas, form an outside view of how much value and how much crazymaking there is inside it, and decide whether to engage. Part of the epistemic immune system is tracking the cost side of the corresponding cost/benefit. But this cost/benefit analysis doesn’t generalize well between people; there’s a big difference between a well-grounded well-studied practitioner looking at their tenth fake framework, and a newcomer who’s still talking about how they vaguely intend to read the Sequences.
Much of the value, in diving into a woo area, is in the possibility that knowledge can be extracted and re-cast into a more solid form. But the people who are still doing social-mimicking instead of cost/benefit are not going to be capable of doing that, and shouldn’t copy strategies from people who are.
(I am trying not to make this post a vagueblog about On Intention Research, because I only skimmed it and I don’t know the people involved well, so I can’t be sure it fits the pattern, but the parts of it I looked at match what I would expect from a group trying to ingest fake frameworks that they weren’t skilled enough to handle.)
I think there’s something important in the distinction between declarative knowledge and metis, and something particularly odd about the process of extracting metis from an area where you believe all of the declarative knowledge in the vicinity is false. I think when a group does that together, they wind up generating a new thing attached to a dialect full of false rhymes, where they sound like they’re talking about the false things but all the jargon is slightly figurative and slightly askew. When I think of what it’s like to be next to that group but not in it, I think of Mage: The Ascension.
Engaging with woo can be a rung on the countersignaling hierarchy: a way to say, look, my mental integrity is so strong that I can have crystal conversations and still somehow stay sane. This is orthogonal to cost/benefit, but I expect anyone doing it for that reason to tell themself a different story. I’m not sure how much of a thing this is, but would be surprised if it wasn’t a thing at all.
I’m skeptical that Leverage’s intention research is well described as them trying to extract wisdom out of an existing framework that someone outside of Leverage created. They were interested in doing original research on related phenomena.
It’s unclear to me how to do a cost-benefit analysis when doing original research in any domain.
If I take credence calibration as a phenomenon to investigate, original research on it involves playing around with estimating probabilities, and it’s hard to know beforehand which exercises will create benefits. Original research involves pursuing a lot of strands that won’t pan out.
Credence calibration is similar to the phenomenon of vibes that Leverage studied in the sense that it’s a topic where it’s plausible that some value is gained by understanding the underlying phenomena better. It’s unclear to me how you would do the related cost-benefit analysis, because it’s in the nature of original research that you don’t really know the fruits of your work beforehand.
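To make “playing around with estimating probabilities” a bit more concrete, here is a minimal sketch of what one such calibration exercise might look like. The prediction data, the Brier score, and the bucketed calibration table are my own illustrative choices, not anything described above or anything attributed to Leverage; it’s just one way this kind of exercise could be scored.

```python
# A minimal, hypothetical credence-calibration exercise: record probability
# estimates alongside outcomes, then check how well stated confidence tracks
# observed frequency. The data below is made up for illustration.
from collections import defaultdict

# Each entry: (stated probability that the claim is true, whether it was true).
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False), (0.7, True),
    (0.6, False), (0.6, True),
]

# Brier score: mean squared error between stated probability and outcome.
# 0.0 is perfect; always saying 50% scores 0.25.
brier = sum((p - float(o)) ** 2 for p, o in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")

# Calibration table: group predictions by stated confidence and compare the
# stated probability to the observed hit rate in each bucket.
buckets = defaultdict(list)
for p, o in predictions:
    buckets[p].append(o)

for p in sorted(buckets):
    outcomes = buckets[p]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {p:.0%} -> actually true {hit_rate:.0%} (n={len(outcomes)})")
```

The point of the sketch is only that the scoring itself is easy; the hard, hard-to-budget part is knowing in advance which such exercises will turn out to be worth the time.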