This feels like motivated stopping to me. Scott also constantly talks about how studies never show the thing they claim to show, etc. This is a meta-analysis of studies where, I presume, people did things like get someone to teach a bunch of random people meditation, then measured the effects. If this didn’t result in any noticeable improvements, it could just mean that it’s hard to teach people meditation (which seems clearly true; another domain where there are many things to Goodhart on); I think it has very little bearing on the question of whether rationalists should or should not, for example, try out 10-day silent meditation retreats.
Similarly I expect that if we did RCTs on the effects of reading the Sequences we would find that they had very little effect, because most people aren’t prepared to understand what the Sequences have to say. That wouldn’t stop me from recommending that people read the Sequences, and it wouldn’t stop you either.
Edit: A better example here is all of the evidence in The Case Against Education that, to first order, people learn nothing in school except maybe basic literacy and numeracy. There are many conclusions you could try drawing from this, but you probably wouldn’t conclude “well, I guess there’s no evidence that history, math, science, etc. work.”
In general, if an intervention would only be useful to people with a certain level of cognitive ability, or to people willing to put in a certain level of effort, I don’t expect an RCT to be capable of detecting this.
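To put rough numbers on that intuition, here is a back-of-the-envelope sketch. The effect size and responder fraction below are made-up illustrative values, not estimates from any actual study: if a practice gives a 0.8 SD benefit to the 10% of participants who are prepared for it, the RCT only sees the diluted average effect, and the required sample size blows up accordingly.

```python
from math import ceil

def n_per_arm(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm for a two-sample comparison
    (80% power, alpha = 0.05, two-sided), using the standard
    normal-approximation formula n = 2 * (z_a + z_b)^2 / d^2."""
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

full_effect = 0.8          # hypothetical benefit for "prepared" participants, in SDs
responder_fraction = 0.1   # hypothetical fraction of participants who actually benefit
diluted_effect = responder_fraction * full_effect  # what a random-assignment RCT averages over

print(n_per_arm(full_effect))     # ~25 per arm if everyone benefited this much
print(n_per_arm(diluted_effect))  # ~2450 per arm to detect the diluted average
```

So under these made-up numbers a study powered for the "everyone benefits" case is roughly a hundred times too small to detect the same benefit concentrated in a small subgroup, and a null result tells you little about whether that subgroup exists.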
(Speaking for myself, I think I personally have gotten several important albeit subtle benefits out of doing mindfulness meditation, although it’s hard to separate the effects of meditation out from all of the other stuff I’ve been doing. I don’t think it would have done very much in isolation, but that’s not how most people historically used meditation anyway: historically it’s been a component of a larger spiritual practice.)
I have complicated opinions here and don’t have time to explain all of them, so here is a quick glimpse. I wrote this quickly and in one go, so parts of it may not be fully reflectively endorsed:
I think the evidence for mathematics and formal reasoning being effective cognitive tools is very strong, and that we should have a high prior on the Sequences being effective because they try to teach those tools. The evidence for their effectiveness comes from the overwhelming success of quantitative methods in physics, engineering, and many other applications. I think similar things are true about the other things the Sequences try to teach, such as getting an intuitive grasp of probabilities, studying cognitive biases, etc., which have shown up in the thinking styles of almost all good scientists I’ve studied.
I think mysticism mostly lacks this evidence-base. I think the overwhelming success of organized religion points to it being something that might be useful for collective-action reasons, but not as something particularly useful for figuring things out about the world, building things, or generally getting things done. As most discussions about mysticism on the site have emphasized, I am interested in evidence that there are large parts of society that have benefited from mysticism in the way they benefited from quantitative reasoning (in the domain of getting things done, not in the domain of feeling good), or at least individuals who seem to have performed impressive feats I clearly care about.
For things whose effectiveness lacks evidence as strong as the scientific revolution, I think it makes sense to only look for a little while and mostly stop when they start looking as murky as everything else that claims to help people in some way or another. I think a lot of people were interested because the evidence on mindfulness was one promising sign of effectiveness, and now that that’s much murkier again, it seems very reasonable for me to stop investing resources into trying to understand this. I personally don’t think mindfulness was ever a crux for me, but it seems reasonable that it would be for others.
@Qiaochu: I would be interested in Double Cruxing about this sometime, and then maybe writing down the results from our conversation. If you are up for that.
Edit: After writing this, I think I overstated my claim above. I think there is a strong a priori argument for why phenomenology, and studying how your own mind works, is very important, but I guess I am mostly not convinced that mysticism and circling and Kenshō and any of the other things that have recently been brought up are actually good at that. I think circling has the strongest argument going for it, and broad mysticism probably has some useful tools buried in a giant pile of epistemically hazardous concepts, but I can imagine changing my mind on that. I also think that in the frame of phenomenology, mathematics is one of the most powerful tools for helping you develop a deeper understanding of the intuitive concepts that you have, and I am much more interested in tools that look like math than I am in tools that look like chakras.
I think the evidence for mathematics and formal reasoning being effective cognitive tools is very strong, and that we should have a high prior on the Sequences being effective because they try to teach those tools.
Effective for whom? And what are your priors on how easy it is to teach things in general? There’s a big difference between “tries to teach X” and “successfully teaches X.”
I think meditation and mysticism completely lack this evidence-base.
I’m not sure what you mean by this, mostly because I don’t know what you think my position is on what meditation and/or mysticism can do for people. Most of my comment is arguing against what I take to be a bad argument; the only place where I describe my position is the very end, where I only describe my own experience in vague terms and don’t make any claims about what other people might or might not get out of meditation.
or at least individuals who seem to have performed impressive feats I clearly care about.
I’m about to run an experiment at the CFAR office in a few weeks where I try to teach a bunch of people how to strengthen their ability to acquire trustworthy inside views / gears models through learning mathematics—exactly the sort of thing you think is important.
I would not have been capable of doing this in any of the previous years you’ve known me, because I spent nearly all of those years crippled by social fears that prevented me from doing anything like organizing events on my own. I’ve been working through these fears in all sorts of ways, but mostly through circling and tantra (although in the end improving my diet seems to have been key as well; I still don’t have a great gears model of what’s been happening to me). Some of the parts that sounded like woo were actually important and taught me actual skills that I actually use to resolve my emotional bugs.
and now that that’s much murkier again, it seems very reasonable for me to stop investing resources into trying to understand this.
Yeah, that’s fine, I can’t tell people what tradeoffs to make. The thing that bugs me about cousin_it’s attitude here and in general is the dismissiveness; not just “hey, looks like the outside view evidence isn’t that strong here, so inside view or bust, I guess” but “this thing is stupid and low-status and anyone who likes it is probably also stupid and low-status, after all, a meta-analysis said so.”
Like, I can’t tell people what CoZE experiments to run, but I’m going to continue to object to people doing what feels to me like trying to make particular classes of illegible CoZE experiments low-status to run. I think illegible CoZE experiments are important and that there are large classes of bugs you basically can’t solve any other way.
Some people have the great fortune of being able to do what they want without ever running into such a bug; good for them, but I don’t want those people dismissing the experiences of people who aren’t so lucky.
@Qiaochu: I would be interested in Double Cruxing about this sometime, and then maybe writing down the results from our conversation. If you are up for that.
Ah, yes. Sorry. The motivation for writing this comment was less “Qiaochu is wrong about something and here is why” and more “I feel something was slightly off with the framing in that two-comment exchange, and here is how the trade-off looks to me internally”. I should have made that clearer (writing on my phone seems to have some drawbacks I haven’t properly considered).
I basically agree with you that the dismissive framing seems pretty bad and not super productive for the discussion, but emotionally it seems like a fine attitude to have (I just think it starts having externalities when that attitude shapes the discourse significantly, e.g. by labeling things as low-status).
Thanks for sharing the social anxiety thing. I do actually think that that’s pretty good evidence, and I am interested in hearing more about your models of what drove that change.
I guess circling was always sold to me as a meditation practice specifically designed to train metacognition (with a focus on social metacognition). So it feels to me like it’s roughly in that category.
I am interested in evidence that there are large parts of society that have benefited from mysticism in the way they benefited from quantitative reasoning (in the domain of getting things done, not in the domain of feeling good), or at least individuals who seem to have performed impressive feats I clearly care about.
Does “being happier” count as a feat that you care about?
Sure, I’d be interested. Send me an email.
Really? Over and above meditation practices that are about training metacognition?