I think the incentives in science and woo are different.
I agree, though I’m not sure how that observation relates to my comment. But yes, certainly evaluating the incentives and causal history of a claim is an important part of epistemology.
Someone who wants to “salvage” e.g. Buddhism is privileging a source that has a replication rate way below 50%.
I’m not sure it really makes sense to think in terms of salvaging “Buddhism”, or of it having a particular replication rate (it seems pretty dubious whether the concept of a replication rate is even well-defined outside a particular narrow context in the first place). There are various claims associated with Buddhism, some of which are better supported and potentially more valuable than others.
E.g. my experience is that much of meditation seems to work the way some Buddhists say it works, and some of their claims seem to be supported by compatible models and lines of evidence from personal experience, neuroscience, and cognitive science. Other claims, very much less so. Talking about the “replication rate of Buddhism” seems to suggest taking a claim and believing it merely on the basis of Buddhism having made such a claim, but that would be a weird thing to do. We evaluate any claim on the basis of several factors, such as what we know about the process that generated it, how compatible it is with our other beliefs, how useful it would be for explaining experiences we’ve had, what success similar claims have shown in helping us get better at something we care about, etc. And then some other claim that’s superficially associated with the same source (e.g. “Buddhism”) might end up scoring so differently on those factors that it doesn’t really make sense to think of the two as even being related.
Even if you were looking at a scientific field that was known to have a very low replication rate, there might be some paper that seemed more likely to be true and also relevant for things that you cared about. Then it would make sense to take the claims in that paper and use them to draw conclusions that were as strong or weak as warranted, based on that specific paper and everything else you knew.
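To make “as strong or weak as warranted” concrete, here’s a minimal sketch (my own illustration with made-up numbers, not anything from the comments above) of how a field-level base rate and paper-specific evidence might combine in a Bayesian update:

```python
# Minimal sketch with made-up numbers: combine a field-level base rate
# (the prior) with paper-specific evidence (a likelihood ratio) via Bayes' rule.
def posterior(prior, likelihood_ratio):
    """P(claim is true | evidence), using the odds form of Bayes' rule."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

field_replication_rate = 0.3   # low base rate for the field as a whole
strong_paper_evidence = 8.0    # this particular paper looks unusually solid
weak_paper_evidence = 1.2      # this one adds little beyond the base rate

print(posterior(field_replication_rate, strong_paper_evidence))  # ~0.77
print(posterior(field_replication_rate, weak_paper_evidence))    # ~0.34
```

The point is just that the field’s low replication rate sets the prior, while the specifics of a given paper can still move you a long way from it, in either direction.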
Imagine two parallel universes, each of them containing a slightly different version of Buddhism. Both versions tell you to meditate, but one of them, for example, concludes that there is “no-self”, and the other concludes that there is “all-self”, or some other similarly nebulous claim.
How certain do you feel that in the other universe you would evaluate the claim and say: “wrong”? As opposed to finding a different interpretation under which the other conclusion is also true.
(Assuming the same peer pressure, etc.)
(Upvoted.)
Well, that is kind of already the case, in that there are also Buddhist-influenced people talking about “all-self” rather than “no-self”. AFAICT, the framings sound a little different but are actually equivalent: e.g. there’s not much difference between saying “there is no unique seat of self in your brain that one could point at and say that it’s the you” and “you are all of your brain”.
There’s more to this than just that, given that talking in terms of the brain etc. isn’t what a lot of Buddhists would do, but it points at the rough gist of it, and I guess you’re not actually after a detailed explanation. Another way of framing it is something Eliezer once pointed out: there is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. Similarly, a graph where every vertex is marked as self is kind of equivalent to one where none are.
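As a minimal sketch of the graph point (my own illustration; the vertex set and helper function are just made up for the example): complementation is a bijection, so specifying a graph and specifying its complement carry exactly the same information.

```python
# Minimal sketch: a graph and its complement determine each other exactly,
# so a fully connected graph carries the same information as an empty one.
from itertools import combinations

def complement(vertices, edges):
    """Edges of the complement graph: present iff absent in the original."""
    all_edges = {frozenset(e) for e in combinations(sorted(vertices), 2)}
    return all_edges - {frozenset(e) for e in edges}

vertices = {1, 2, 3, 4}
empty_graph = set()                                  # no vertex connected to any other
complete_graph = complement(vertices, empty_graph)   # every pair connected

# Complementing is its own inverse: knowing one graph pins down the other.
assert complement(vertices, complete_graph) == empty_graph
# Analogy: marking every vertex "self" vs. marking none conveys the same thing.
```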
More broadly, a lot of my interpretation of “no-self” isn’t actually that directly derived from any Buddhist theory. When I was first exposed to such theories, much of their talk about self/no-self sounded to me like the kind of misguided folk speculation of a prescientific culture that didn’t really understand the mind very well yet. It was only when I actually tried some meditative practices and got to observe my mind behaving in ways that my previous understanding of it couldn’t explain, that I started thinking that maybe there’s actually something there.
So when I talk about “no-self”, it’s not so much that “I read about this Buddhist thing and then started talking about their ideas about no-self”; it’s more like “I first heard about no-self when it was still a bit vague what exactly it meant and whether it even made any sense, but then I had experiences that ‘no-self’ felt like a reasonable cluster label for, so I assumed that these kinds of things were probably what the Buddhists meant by no-self, and also noticed that some of their theories now started feeling like they made more sense and could help explain my experiences while also being compatible with what I knew about neuroscience and cognitive science”.
Also, you say that there being no-self is a “nebulous” claim, but I don’t think I believe in a nebulous and ill-defined claim. I believe in a set of specific concrete claims, such as “there’s no central supreme-leader agent running things in the brain; the brain’s decision-making works by a distributed process that a number of subsystems contribute to, and where very different subsystems can be causally responsible for a person’s actions at different times”. “No-self” is then just a label for that cluster of claims. But the important thing is the claims themselves, not whether there’s some truth of “no-self” in the abstract.
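Purely as a toy illustration of that cluster of claims (my own sketch, not a model any Buddhist or neuroscientist has endorsed), here is what “no central leader, different subsystems causally responsible at different times” might look like in miniature:

```python
# Toy sketch only: several subsystems each contribute an urgency score, and
# whichever is strongest in the current context drives the action; no central
# supreme-leader agent ever decides.
import random

SUBSYSTEMS = {
    "eat":     lambda ctx: 0.9 if ctx["hours_since_meal"] > 5 else 0.1,
    "explore": lambda ctx: 0.7 if ctx["novel_stimulus"] else 0.2,
    "sleep":   lambda ctx: 0.8 if ctx["hours_awake"] > 16 else 0.1,
}

def act(context):
    # Each subsystem scores the situation (with a little noise); the action
    # comes from whichever subsystem happens to dominate right now.
    scores = {name: score(context) + random.gauss(0, 0.05)
              for name, score in SUBSYSTEMS.items()}
    return max(scores, key=scores.get)

print(act({"hours_since_meal": 6, "novel_stimulus": False, "hours_awake": 8}))   # almost always "eat"
print(act({"hours_since_meal": 1, "novel_stimulus": True,  "hours_awake": 18}))  # usually "sleep"
```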
So let me slightly rephrase your question as something like “how certain am I that in an alternate universe where Buddhism made importantly wrong claims, I would evaluate them as wrong?”. Reasonably certain, given that I currently already only put high probability on those Buddhist claims for which I have direct evidence, put a more moderate probability on claims I don’t have direct evidence for but have heard from meditators who have seemed sane and reliable so far, and disbelieve quite a few that I don’t think I have good evidence for and which contradict what I otherwise know about reality. (Literal karma or reincarnation, for instance.) Of course, I don’t claim to be infallible and do expect to make errors (in both directions), but again that’s the case with any field.