This post seems to be implying that “salvage epistemology” is somehow a special mode of doing epistemology, and that one either approaches woo from a frame of uncritically accepting it (clearly bad) or from a frame of salvage epistemology (still possibly bad but not as clearly so).
But what’s the distinction between salvage epistemology and just ordinary rationalist epistemology?
When I approach woo concepts to see what I might get out of them, I don’t feel like I’m doing anything different from what I do when I look at a scientific field and see what I might get out of it.
In either case, it’s important to remember that hypotheses point to observations and that hypotheses are burdensome details. If a researcher publishes a paper saying they have a certain experimental result, then that’s data towards something being true, but it would be dangerous to take their interpretation of the results—or for that matter the assumption that the experimental results are what they seem—as the literal truth. In the same way, if a practitioner of woo reports a certain result, that is informative of something, but that doesn’t mean the hypothesis they are offering to explain it is true.
In either case, one needs to separate “what the commonly offered narratives are” from “what would actually explain these results”. And I feel like exactly the same epistemology applies, even if the content is somewhat different.
Indeed. I left a comment on the Facebook version of this basically saying “it’s all hermeneutics unless you’re just directly experiencing the world without conceptions, so worrying about woo specifically is worrying about the wrong frame”.
I think the incentives in science and woo are different. Scientists are rewarded for discovering new things, or finding an error in existing beliefs, so if 100 scientists agree on something, that probably means more than if 100 astrologers agree on something. You probably won’t make a career in science by merely saying “all my fellow scientists are right”, but I don’t see how agreeing with fellow astrologers would harm your career in astrology.
But what’s the distinction between salvage epistemology and just ordinary rationalist epistemology?

An ordinary rationalist will consider some sources more reliable and other sources less reliable. For example, knowing that 50% of findings in some field don’t replicate is considered bad news.
Someone who wants to “salvage” e.g. Buddhism is privileging a source that has a replication rate way below 50%.
I think the incentives in science and woo are different.

I agree, though I’m not sure how that observation relates to my comment. But yes, certainly evaluating the incentives and causal history of a claim is an important part of epistemology.
Someone who wants to “salvage” e.g. Buddhism is privileging a source that has a replication rate way below 50%.

I’m not sure it really makes sense to think in terms of salvaging “Buddhism”, or to say that it has a particular replication rate (it seems pretty dubious whether the concept of a replication rate is even well-defined outside a particular narrow context in the first place). There are various claims associated with Buddhism, some of which are better supported and potentially more valuable than others.
E.g. my experience is that much of meditation seems to work the way some Buddhists say it works, and some of their claims seem to be supported by compatible models and lines of evidence from personal experience, neuroscience, and cognitive science. Other claims, very much less so. Talking about the “replication rate of Buddhism” seems to suggest taking a claim and believing it merely on the basis of Buddhism having made such a claim, but that would be a weird thing to do. We evaluate any claim on the basis of several factors, such as what we know about the process that generated it, how compatible it is with our other beliefs, how useful it would be for explaining experiences we’ve had, what success similar claims have shown in helping us get better at something we care about, and so on. And then some other claim that’s superficially associated with the same source (e.g. “Buddhism”) might end up scoring so differently on those factors that it doesn’t really make sense to think of the two as even being related.
Even if you were looking at a scientific field that was known to have a very low replication rate, there might be some paper that seemed more likely to be true and also relevant for things that you cared about. Then it would make sense to take the claims in that paper and use them to draw conclusions that were as strong or weak as warranted, based on that specific paper and everything else you knew.
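To make that a bit more concrete, here’s a toy sketch of the kind of update I have in mind: treat the field’s replication record as a base rate and the paper-specific evidence as a likelihood ratio. The `posterior` helper and all of the numbers are made up purely for illustration, not taken from anywhere in this discussion.

```python
# Toy example: how much to believe one specific finding from a field with a
# poor replication record. The field's base rate sets the prior; the
# paper-specific evidence then moves it via the odds form of Bayes' rule.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update a prior probability by a likelihood ratio (odds form of Bayes' rule)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

field_base_rate = 0.35   # assume ~35% of findings in this field hold up
paper_evidence_lr = 4.0  # assume this paper's evidence is ~4x likelier if the finding is real

print(round(posterior(field_base_rate, paper_evidence_lr), 2))  # 0.68
```

The field-level replication rate only sets where you start; the specifics of the paper, and everything else you know, determine where you end up.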
Imagine two parallel universes, each of them containing a slightly different version of Buddhism. Both versions tell you to meditate, but one of them, for example, concludes that there is “no-self”, and the other concludes that there is “all-self”, or some other similarly nebulous claim.
How certain do you feel that in the other universe you would evaluate the claim and say “wrong”? As opposed to finding some interpretation under which the other conclusion is also true.
(Assuming the same peer pressure, etc.)
(Upvoted.)
Well, that is kind of already the case, in that there are also Buddhist-influenced people talking about “all-self” rather than “no-self”. AFAICT, the framings sound a little different but are actually equivalent: e.g. there’s not much difference between saying “there is no unique seat of self in your brain that one could point at and say that it’s you” and “you are all of your brain”.
There’s more to this than just that, given that talking in terms of the brain isn’t what a lot of Buddhists would do, but it points at the rough gist of it, and I guess you’re not actually after a detailed explanation anyway. Another way of framing it is what Eliezer once pointed out: that there is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. Similarly, a graph where every vertex is marked as “self” is kind of equivalent to one where none are.
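(For what it’s worth, the graph claim is easy to check directly. The snippet below is just my own toy illustration of it, not anything from Eliezer’s remark; the `complement` helper is a name I made up for this sketch.)

```python
# Complementing a graph flips every possible edge on or off. The map is a
# bijection and its own inverse, so a graph and its complement carry the
# same information; the complete graph and the empty graph are one such pair.
from itertools import combinations

def complement(vertices, edges):
    """Return the set of vertex pairs NOT present in `edges`."""
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return all_pairs - set(edges)

vertices = range(4)
complete = {frozenset(p) for p in combinations(vertices, 2)}  # edge between every two vertices
empty = set()                                                 # no edges at all

assert complement(vertices, complete) == empty
assert complement(vertices, empty) == complete
assert complement(vertices, complement(vertices, complete)) == complete  # involution
```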
More broadly, a lot of my interpretation of “no-self” isn’t actually that directly derived from any Buddhist theory. When I was first exposed to such theories, much of their talk about self/no-self sounded to me like the kind of misguided folk speculation of a prescientific culture that didn’t really understand the mind very well yet. It was only when I actually tried some meditative practices and got to observe my mind behaving in ways that my previous understanding of it couldn’t explain, that I started thinking that maybe there’s actually something there.
So when I talk about “no-self”, it’s not so much that “I read about this Buddhist thing and then started talking about their ideas about no-self”; it’s more like “I first heard about no-self when it was still a bit vague what exactly it meant and whether it even made any sense, but then I had experiences that ‘no-self’ felt like a reasonable cluster label for, so I assumed that these kinds of things were probably what the Buddhists meant by no-self, and also noticed that some of their theories now felt like they made more sense and could help explain my experiences, while also being compatible with what I knew about neuroscience and cognitive science”.
Also, you say that there being no-self is a “nebulous” claim, but I don’t think I have belief in a nebulous and ill-defined claim. I have belief in a set of specific concrete claims, such as “there’s no central supreme leader agent running things in the brain; the brain’s decision-making works by a distributed process that a number of subsystems contribute to, and where very different subsystems can be causally responsible for a person’s actions at different times”. “No-self” is then just a label for that cluster of claims. But the important thing is the claims themselves, not whether there’s some truth of “no-self” in the abstract.
So let me slightly rephrase your question as something like “how certain am I that, in an alternate universe where Buddhism made importantly wrong claims, I would evaluate them as wrong?” Then: reasonably certain, given that I currently only put high probability on those Buddhist claims that I have direct evidence for, put a more moderate probability on claims I don’t have direct evidence for but have heard from meditators who have seemed sane and reliable so far, and disbelieve quite a few that I don’t think I have good evidence for and which contradict what I otherwise know about reality. (Literal karma or reincarnation, for instance.) Of course, I don’t claim to be infallible and do expect to make errors (in both directions), but again, that’s the case with any field.
Who has tried to replicate it?
Scientists are rewarded for discovering new things, or finding an error in existing beliefs, so if 100 scientists agree on something, that probably means more than if 100 astrologers agree on something.

When aspiring rationalists interact with science, it’s not just about believing whatever 100 scientists agree on. Take COVID-19, for example: we read a bunch of science, built models in our heads about what was happening, and then took action based on those models.
Scientists are rewarded for discovering new things, or finding an error in existing beliefs, so if 100 scientists agree on something, that probably means more than if 100 astrologers agree on something.

It’s not obvious to me that this effect dominates over the political punishments for challenging powerful people’s ideas. I definitely think science is more self-correcting than astrology over decades, but I don’t trust the process on a year-to-year basis.