Putting on the Jordan Peterson mask adds two crucial elements that rationalists often struggle with: motivation and meaning.
Holy shit, yes, thank you, this is exactly what has been motivating all of my contributions to LW 2.0. What is even the point of strengthening your epistemics if you aren’t going to then use those strong epistemics to actually do something?
I first read the Sequences 6 years ago, and since then what little world-saving-relevant effort I’ve put in has been entirely other people asking me to join in on their projects. The time I spent doing that (at SPARC, at CFAR workshops, at MIRI workshops) was great; an embarrassing amount of the time I spent not doing that (really, truly embarrassing; amounts of time only a math grad student could afford to spend) was wasted on various forms of escapism (random internet browsing, TV, video games, anime, manga), because I was sad and lonely and put a lot of effort into avoiding having to deal with that (including avoiding debugging it at CFAR workshops because it was too painful to think about). At almost no point did I have the motivation to start a project on my own, and I didn’t.
I’ve been working intently on debugging this for the last year or so and it finally more or less got solved in the beginning of February (I’ve been avoiding saying too much about this directly for the last month because I wanted to check that it’s sticking—so far so good, one month in). I did not solve this problem by getting better at epistemics, at least not in any way that would have been legible to a skeptical rationalist. I solved it by following my instincts towards Circling, Tantra workshops, the Authentic Man Program—all interventions in a cluster that rationalists are only beginning to talk about publicly. (And improving my diet, that mattered too.)
I think there is an entire class of interventions that can radically improve your motivation and your sense that your life has meaning, which actually matters even if all I’m optimizing for is x-risk reduction (which it’s not), and I really worry that 1) some rationalists are cutting themselves off from investigating these interventions because they’re, from my perspective, way too worried about epistemic risks, and 2) this position will become the LW 2.0 default. If that happens I’m going to leave.
Rationalists who are epistemically strong are very lucky: you can use that strength in a place where it will actually help you, like investigating mysticism, by defending you from making the common epistemic mistakes there. It should be exciting for rationalists to learn about powerful tools that carry epistemic risks, because those are precisely the tools that rationalists should be best equipped to use compared to other people! (Separately, I also think that these tools have actually improved my epistemics, by improving my ability to model myself and other people.)
Rationalists who are epistemically strong are very lucky: you can use that strength in a place where it will actually help you, like investigating mysticism, by defending you from making the common epistemic mistakes there.
This is an interesting idea, but how does someone tell whether they’re strong enough to avoid making the common epistemic mistakes when investigating mysticism? For example, if I practice meditation I might eventually start experiencing what Buddhists call vipassana (“insight into the true nature of reality”). I don’t know if I’d be able to avoid treating those experiences as some sort of direct metaphysical knowledge, as most people apparently do, as opposed to just qualia generated by my brain while it’s operating differently from normal (e.g., while in a state of transient hypofrontality).
There are probably a number of distinct epistemic risks surrounding mysticism. Bad social dynamics in the face of asymmetric information might be another one. (Access to mystical experiences is hard to verify by third parties but tempting to claim as a marker of social status.) I don’t know how someone or some community could be confident that they wouldn’t fall prey to one of these risks.
Good question. You can test your ability to avoid mistaking strong emotions for strong beliefs in general. For example, when you get very angry at someone, do you reflexively believe that they’re a terrible person? When you get very sad, do you reflexively believe that everything is terrible? When you fall in love with someone, do you reflexively believe that they have only good qualities and no bad qualities? Etc.
I know I keep saying this, but it keeps being true: for me a lot of my ability to do this, and/or my trust in my ability to do this, came from circling, and specifically repeatedly practicing the skill of distinguishing my strong emotional reactions to what was happening in a circle from my best hypotheses about what was happening.
I don’t know how someone or some community could be confident that they wouldn’t fall prey to one of these risks.
I can’t tell people what level of confidence they should want before trying this sort of thing, but I decided based on my instincts that the risk for me personally was low enough relative to the possible benefits that I was going to go for it, and things have been fine as far as my outside view is concerned so far, e.g. my belief in physics as standardly understood has not decreased at all.
Also, to some extent I feel like this argument proves too much. There are epistemic risks associated to e.g. watching well-made TV shows or movies, or reading persuasive writing, and rationalists take on these epistemic risks all the time without worrying about them.
You can test your ability to avoid mistaking strong emotions for strong beliefs in general.
How much of this ability is needed in order to avoid taking strong mystical experiences at face value?
I can’t tell people what level of confidence they should want before trying this sort of thing, but I decided based on my instincts that the risk for me personally was low enough relative to the possible benefits that I was going to go for it,
In the comment I was replying to, you were saying that some rationalists are being too risk-averse. It seems like you’re now backing off a bit and just talking about yourself?
and things have been fine so far, e.g. my belief in physics as standardly understood has not decreased at all.
I’m worried that the epistemic risks get stronger the further you go down this path. Have you had any mystical experiences similar to vipassana yet? If not, your continuing belief in physics as standardly understood does not seem to address my worry.
Also, to some extent I feel like this argument proves too much. There are epistemic risks associated to e.g. watching well-made TV shows or movies, or reading persuasive writing, and rationalists take on these epistemic risks all the time without worrying about them.
We do have empirical evidence about how strong these risks are though, and the epistemic risks associated with “investigating mysticism” seem much stronger than those associated with watching well-made TV shows and movies, or reading persuasive writing. There are other activities, for example joining a church (let’s say for the social benefits), that I think have epistemic risks more comparable with investigating mysticism, and rationalists do worry about them.
P.S. After writing the above I viewed some interviews of Jeffery Martin (author of the PNSE paper previously discussed here), and he comes across as not being obviously irrational or epistemically corrupted by his investigations into mystical experiences. For example, he claims to have gone through PNSE locations 1 through 4, but unlike the research subjects described in the paper, he does not seem to have (or has managed to overcome) a “tremendous sense of certainty that participants were experiencing a ‘deeper’ or ‘more true’ reality”. However, he does seem to consistently oversell the benefits of PNSE relative to his own descriptions in the paper (which I infer from his interviews was largely if not completely written before he “transitioned” to PNSE himself), and this makes me think he still got corrupted in a more subtle way.
I’ve only taken a few steps down the path that Qiaochu is following, but I have a few thoughts regarding epistemic risk-management:
If you’re ever going to investigate any altered-consciousness experiences at all, you’re going to have to take a risk. You can never be 100% sure that something is “epistemically safe”: certainty is impossible and time is limited.
There is clearly an efficient frontier of risk/reward tradeoffs. I’m also a fan of circling, which doesn’t ask you to accept any supernatural claims or dogmas and is incredibly useful for understanding the landscape of human minds. A few circling sessions with seriously strange people can do a lot to cure one of the typical mind fallacy. On the other hand, joining Scientology the same week you start experimenting with ayahuasca is probably unwise.
As a community, we can reduce risk by diversifying. Some of us will do LSD, some will do vipassana, some will circle, some will listen to 100 hours of Peterson… We should be able to notice if any particular subgroup is losing its mind. The real danger would occur if all of us suddenly started doing the same thing with no precautions.
What would it look like, if we noticed that a particular subgroup was beginning to lose its mind? I think it might look like a few unusually-rude people calling into question the alleged experiences of that particular subgroup and asking pointed questions about exactly what had happened to them and exactly why they thought themselves better off for it; and like the members of that particular subgroup responding with a combination of indignation and obfuscation: “we’ve definitely been changed for the better, but of course we can’t expect you to understand what it’s like if it hasn’t happened to you, so why do you keep pushing us for details you know we won’t be able to give?” / “I find it very discouraging to get this sort of response, and if it keeps happening I’m going to leave”; and like some of the more community-minded folks objecting to the rudeness of the questioners, observing acerbically that it always seems to be the same people asking those rude questions and wondering whether the emperor really has any clothes, and maybe even threatening to hand out bans.
All of which sounds kinda familiar.
I don’t actually think that … let’s call it the Berkeley School of rationality, though I’m not sure what fraction of its members are actually in Berkeley … is really losing its mind. (I’m not 100% sure it isn’t, though.) And, for the avoidance of doubt, I think it would be a damn shame if LW lost the people who have made noises about possibly leaving if the local community is too rude to them. -- But if it were losing its mind, I think that might well look roughly the way things currently look.
Which, I think, means: maybe we can’t safely assume that if a particular subgroup was losing its mind then we’d notice and take the actions needed for epistemic safety. Because we’re (almost certainly correctly) not rising up and throwing out the Berkeleyans right now, nor would we (probably correctly) even if they got a couple of notches weirder than they are now … but by that point, if they were losing their minds, they would surely be posing a genuine epistemic threat to at least some of the rest of us.
gjm, point well taken. I wonder whether it would be easier for people inside or outside Berkeley to spot anyone there going seriously off the rails and to say something about it.
Anyway, I do want to elaborate a little bit on my “Efficient Frontier” idea. If anyone can build a map of which “mystical experiences” are safe/dangerous/worthwhile/useless and for whom, it should be people like us. I think it’s a worthwhile project and it has to be done communally, given how different each person’s experience may be and how hard it is to generalize.
The main example here is Sam Harris, a hardcore epistemic rationalist who has also spent a lot of time exploring “altered states of consciousness”. He wrote a book about meditation, endorses psychedelics with caveats, is extremely hostile to any and all religions, and probably thinks that Peterson is kinda crazy after arguing with him for four hours. Those are good data points, but we need 20 more Sam Harrises. I’m hoping that LW can be the platform for them.
Perhaps we need to establish some norms for talking about “mystical experiences”, fake frameworks, altered consciousness etc. so that people feel safe both talking and listening.
I was triggered by this initially, but I reread it and you’re making a completely reasonable point. I notice I’m still concerned about the possibility that your reasonable point / motte will be distorted into a less reasonable point / bailey.
“I find it very discouraging to get this sort of response, and if it keeps happening I’m going to leave”
That is not what I said. What I said is that if the pushback I’ve been getting becomes the default on LW 2.0, then I’m going to leave. This is a matter of people deciding what kind of place they want LW 2.0 to be. If they decide that LW 2.0 does not want to be the place for the things I want to talk about, then I’m going to respect that and talk about those things somewhere else. Staying would be unpleasant for everyone involved.
But if it were losing its mind, I think that might well look roughly the way things currently look.
I concede the point. We can try asking what kinds of externally verifiable evidence would distinguish this world from a world in which people like Val and I have been talking about real things which we lack the skill to explain (in a way satisfying to skeptical rationalists) via text. One prediction I’m willing to make is that I’m now more capable of debugging a certain class of thorny emotional bugs, so e.g. I’m willing to predict that over the next few years I’ll help people debug such bugs at CFAR workshops and wherever else, and that those people will at least in expectation be happier, more productive, more willing to work on x-risk or whatever they actually want to do instead, less likely to burn out, etc.
(But, in the interest of trying to be even-handed about possible hypotheses that explain the current state of public evidence, it’s hard to distinguish the above world from a world in which people like Val and I are losing our minds and also becoming more charismatic / better at manipulation.)
I think that perhaps what bothers a lot of rationalists about your (or Valentine’s) assertions is down to three factors:
1. You don’t tend to make specific claims or predictions. I think you would come off better—certainly to me and I suspect to others—if you were to preregister hypotheses more, like you did in the above comment. I believe that you could and should be more specific, perhaps stating that over a six month period you expect to work n more hours without burning out or that a consensus of reports from outsiders about your mental well-being will show a marked positive change during a particular time period that the evaluators did not know was special. While these would obviously not constitute strong evidence, a willingness to informally test your ideas would at least signal honest belief.
2. You seem to make little to no attempt to actually communicate your ideas in words, or even define your concepts in words. Frankly, it continues to strike me as suspicious that you claim difficulty in even analogizing or approximating your ideas verbally. Even something as weak as the rubber-sheet analogy for General Relativity would—once again—signal an honest attempt.
3. There doesn’t seem to be consistency on the strength of claims surrounding frameworks. As mentioned elsewhere in the thread, Valentine seems to claim that mythic mode generated favorable coincidences like he was bribing the DM. Yet at other times Valentine seems to acknowledge that the narrative description of reality is at best of metaphorical use.
I think that given recent rationalist interest in meditation, fake frameworks, etc., and in light of what seems to be a case of miscommunication and/or under-communication, there should be some attempt to establish a common basis of understanding, so that if someone asks, “Are you saying x?” they can be instantly redirected to a page that gives the relevant definitions and claims. If you view this as impossible, do you think that that is a fact of your map or of the relevant territory?
Anyway, I really hope everyone can reach a point of mutual intelligibility, if nothing else.
You don’t tend to make specific claims or predictions. I think you would come off better—certainly to me and I suspect to others—if you were to preregister hypotheses more, like you did in the above comment. I believe that you could and should be more specific, perhaps stating that over a six month period you expect to work n more hours without burning out or that a consensus of reports from outsiders about your mental well-being will show a marked positive change during a particular time period that the evaluators did not know was special.
I have several different responses to this which I guess I’ll also number.
1. Sure, fine, I’m willing to claim this. Everyone who has interacted with me both in the last month and, say, a year ago will tell you that I am visibly happier and doing more of what I actually want (“productive” can be a loaded term). People can ask Anna, Duncan, Lauren, etc. if they really want. I can also self-report that I’ve engaged in much less escapism (TV, movies, video games, etc.) this month than in most months of the last 5 years, and what little I have engaged in was mostly social.
2. I would love to be having this conversation; if the responses I’ve been getting had been of the form “hey, you seem to be making these interesting and non-obvious claims, what evidence do you have / what are your models / what are predictions you’re willing to make?” then I would’ve been happy to answer, but instead the responses were of the form “hey, have you considered the possibility that you’re evil?” I have a limited budget of time and attention I’m willing to spend on LW and my subjective experience is that I’ve been spending it putting out fires that other people have been starting. Please, I would love to have the nice conversation where we share evidence and models and predictions while maintaining principle of charity, but right now I mostly don’t have enough trust that I won’t be defected on to do this.
3. You’ll notice I haven’t written a top-level post about any of these topics. That’s precisely because I’m not yet willing to put in the time and effort necessary to get it up to epistemic snuff. I didn’t want to start this conversation yet; it was started by others and I felt a duty to participate in order to prevent people from idea inoculating against this whole circle of ideas.
You seem to make little to no attempt to actually communicate your ideas in words, or even define your concepts in words.
This seems like an unfair conflation of what happened in the Kensho post and everything else. The Circling post was entirely an attempt to communicate in words! All of these comments are attempts to communicate in words!
Frankly, it continues to strike me as suspicious that you claim difficulty in even analogizing or approximating your ideas verbally. Even something as weak as the rubber-sheet analogy for General Relativity would—once again—signal an honest attempt.
This is exactly what the cell phone analogy in the Kensho post was for, although I also don’t want people to bucket me and Val too closely here; I’m willing to make weaker claims that I think I can explain more clearly, but haven’t done so yet for the reasons described above.
There doesn’t seem to be consistency on the strength of claims surrounding frameworks. As mentioned elsewhere in the thread, Valentine seems to claim that mythic mode generated favorable coincidences like he was bribing the DM. Yet at other times Valentine seems to acknowledge that the narrative description of reality is at best of metaphorical use.
I warned Val that people would be unhappy about this. Here is one story I currently find plausible for explaining at least one form of synchronicity: operating in mythic mode is visible to other humans on some level, and causes them to want to participate in the myth that it looks like you’re in. So humans can sometimes collaboratively generate coincidences as if they were playing out an improv scene, or something. (Weak belief weakly held.)
As for consistency, it’s partly a matter of what level of claim I or anyone else is willing to defend in a given conversation. It may be that my true belief is strong belief A but that I expect it will be too difficult to produce a satisfying case for why I believe A (and/or that I believe that attempting to state A in words will cause it to be misinterpreted badly, or other things like that), so in the interest of signaling willingness to cooperate in the LW epistemic game, I mostly talk about weaker belief A’, which I can defend more easily, but maybe in another comment I instead talk about slightly weaker or slightly stronger belief A″ because that’s what I feel like I can defend that day. Do you really want to punish me for not consistently sticking to a particular level of weakening of my true belief?
If you view this as impossible, do you think that that is a fact of your map or of the relevant territory?
I think it’s very difficult because of long experiential distances. This is to some extent a fact about my lack of skill and to some extent what I see as a fact about how far away some parts of the territory are from the experience of many rationalists.
Overall, from my point of view there’s a thing that’s happening here roughly analogous to the Hero Licensing dialogue; if I spend all my time defending myself on LW like this instead of just using what I believe my skills to be to do cool stuff, then I won’t ever get around to doing the cool stuff. So at some point I am just going to stop engaging in this conversation, especially if people continue to assume bad faith on the part of people like me and Val, in order to focus my energy and attention on doing the cool stuff.
(This is my second comment on this site, so it is probable that the formatting will come out gross. I am operating on the assumption that it is similar to Reddit, given Markdown.)
To be as succinct as possible, fair enough.
I want to have this conversation too! I was trying to express what I believe to be the origins of people’s frustrations with you, not to try to discourage you. Although I can understand how I failed to communicate that.
I am going to fold this in with the part of your reply that concerns experiential distance and respond to both at once. I suspect that a lot of fear of epistemic contamination comes from the emphasis on personal experience. Personal (meatspace) experiences, especially in groups, can trigger floods of emotions and feelings of insights without those first being fed through rational processing. Therefore it seems reasonable to be suspicious of anyone who claims to teach through personal experience. That being said, the experimental spirit suggests the following course of action: get a small group and try to close their experiential gap gradually, while having them extensively document anything they encounter on the way, then publish that for peer analysis and digestion. Of course that relies on more energy and time than you might have.
This seems like an unfair conflation of what happened in the Kensho post and everything else. The Circling post was entirely an attempt to communicate in words! All of these comments are attempts to communicate in words!
On a general level, I totally concede that I am operating from relatively weak ground. It has been a while—or at least felt like a while—since I read any of the posts I mentioned (tacitly or otherwise) with the exception of Kensho, so that is definitely coloring my vision.
If I spend all my time defending myself on LW like this instead of just using what I believe my skills to be to do cool stuff, then I won’t ever get around to doing the cool stuff. So at some point I am just going to stop engaging in this conversation, especially if people continue to assume bad faith on the part of people like me and Val, in order to focus my energy and attention on doing the cool stuff.
I acknowledge that many people are responding to your ideas with unwarranted hostility and forcing you onto the defensive in a way that I know must be draining. So I apologize for essentially doing that in my original reply to you. I think that I, personally, am unacceptably biased against a lot of ideas due to their “flavor” so to speak, rather than their actual strength.
Do you really want to punish me for not consistently sticking to a particular level of weakening of my true belief?
As to consistency, I actually do want to hold you to some standard of strength with respect to beliefs, because otherwise you could very easily make your beliefs unassuming enough to pass through arbitrary filters. I find ideas interesting; I want to know A, not any of its more easily defensible variants. But I don’t want to punish you or do anything that could even be construed as such.
I suspect that a lot of fear of epistemic contamination comes from the emphasis on personal experience. Personal (meatspace) experiences, especially in groups, can trigger floods of emotions and feelings of insights without those first being fed through rational processing.
I recognize the concern here, but you can just have the System 1 experience and then do the System 2 processing afterwards (which could be seconds afterwards). It’s really not that hard. I believe that most rationalists can handle it, and I certainly believe that I can handle it. I’m also willing to respect the boundaries of people who don’t think they can handle it. What I don’t want is for those people to typical mind themselves into assuming that because they can’t handle it, no one else can either, and so the only people willing to try must be being epistemically reckless.
Therefore it seems reasonable to be suspicious of anyone who claims to teach through personal experience.
There are plenty of completely mundane skills that can basically only be taught in this way. Imagine trying to teach someone how to play basketball using only text, etc. There’s no substitute for personal experience in many skills, especially those involving the body, and in fact I think this should be your prior. It may not feel like this is the prior but I think this is straight up a mistake; I’d guess that people’s experiences with learning skills here are skewed by 1) school, which heavily skews towards skills that can be learned through text, and 2) the selection effect of being LWers, liking the Sequences, etc. There’s a reason CFAR focuses on in-person workshops instead of e.g. blog posts or online videos.
I acknowledge that many people are responding to your ideas with unwarranted hostility and forcing you onto the defensive in a way that I know must be draining. So I apologize for essentially doing that in my original reply to you. I think that I, personally, am unacceptably biased against a lot of ideas due to their “flavor” so to speak, rather than their actual strength.
Thank you.
As to consistency, I actually do want to hold you to some standard of strength with respect to beliefs, because otherwise you could very easily make your beliefs unassuming enough to pass through arbitrary filters. I find ideas interesting; I want to know A, not any of its more easily defensible variants. But I don’t want to punish you or do anything that could even be construed as such.
Unfortunately my sense is strongly that other people will absolutely punish me for expressing A instead of any of its weaker variants—this is basically my story about what happened to Val in the Kensho post, where Val could have made a weaker and more defensible point (for example, by not using the word “enlightenment”) and chose not to—precisely because my inability to provide a satisfying case for believing A signals a lack of willingness to play the LW epistemic game, which is what you were talking about earlier.
(Umeshism: if you only have beliefs that you can provide a satisfying case for believing on LW, then your beliefs are optimized too strongly for defensibility-on-LW as opposed to truth.)
So I’m just not going to talk about A at all, in the interest of maintaining my cooperation signals. And given that, the least painful way for me to maintain consistency is to not talk about any of the weaker variants either.
you can just have the System 1 experience and then do the System 2 processing afterwards (which could be seconds afterwards). It’s really not that hard. I believe that most rationalists can handle it, and I certainly believe that I can handle it.
It is probably true that most rationalists could handle it. It is also probably true, however, that people who can’t handle it could end up profoundly worse for the experience. I am not sure we should endorse potential epistemic hazards with so little certainty about both costs and benefits. I also grant that anything is a potential epistemic hazard and that reasoning under uncertainty is kind of why we bother with this site in the first place. This is all to say that I would like to see more evidence of this calculation being done at all, and that if I was not so geographically separated from the LWsphere, I would like to try these experiences myself.
There’s no substitute for personal experience in many skills, especially those involving the body, and in fact I think this should be your prior. It may not feel like this is the prior but I think this is straight up a mistake; I’d guess that people’s experiences with learning skills here are skewed by 1) school, which heavily skews towards skills that can be learned through text, and 2) the selection effect of being LWers, liking the Sequences, etc.
I am not sure that it should be the prior for mental skills however. As you pointed out, scholastic skills are almost exclusively (and almost definitionally) attainable through text. I know that I can and have learned math, history, languages, etc., through reading, and it seems like that is the correct category for Looking, etc., as well (unless I am mistaken about the basic nature of Looking, which is certainly possible).
So I’m just not going to talk about A. And given that, the least painful way for me to maintain consistency is to not talk about any of the weaker variants either.
This is a sad circumstance, I wish it were otherwise, and I understand why you have made the choice you have considering the (rather ironically) immediate and visceral response you are used to receiving.
I am not sure we should endorse potential epistemic hazards with so little certainty about both costs and benefits.
I’m not sure what “endorse” means here. My position is certainly not “everyone should definitely do [circling, meditation, etc.]”; mostly what I have been arguing for is “we should not punish people who try or say good things about [circling, meditation, etc.] for being epistemically reckless, or allege that they’re evil and manipulative solely on that basis, because I think there are important potential benefits worth the potential risks for some people.”
I am not sure that it should be the prior for mental skills however. As you pointed out, scholastic skills are almost exclusively (and almost definitionally) attainable through text. I know that I can and have learned math, history, languages, etc., through reading, and it seems like that is the correct category for Looking, etc., as well (unless I am mistaken about the basic nature of Looking, which is certainly possible).
I still think you’re over-updating on school. For example, why do graduate students have advisors? At least in fields like pure mathematics that don’t involve lab work, it’s plausibly because being a researcher in these fields requires important mental skills that can’t just be learned through reading, but need to be absorbed through periodic contact with the advisor. Great advisors often have great students; clearly something important is being transmitted even if it’s hard to write down what.
My understanding of CFAR’s position is also that whatever mental skills it tries to teach, those skills are much harder to teach via text or even video than via an in-person workshop, and that this is why we focus so heavily on workshops instead of methods of teaching that scale better.
and I understand why you have made the choice you have considering the (rather ironically) immediate and visceral response you are used to receiving.
I know, right? Also ironically, learning how to not be subject to my triggers (at least, not as much as I was before) is another skill I got from circling.
I’m glad you got over the initial triggered-ness. I did wonder about being even more explicit that I don’t in fact think you guys are losing your minds, but worried about the “lady doth protest too much” effect.
I wasn’t (in case it isn’t obvious) by any means referring specifically to you, and in particular the “if it keeps happening I’m going to leave” wasn’t intended to be anything like a quotation from you or any specific other person. It was intended to reflect the fact that a number of people (I think at least three) of what I called the Berkeley School have made comments along those general lines—though I think all have taken the line you do here, that the problem is a norm of uncharitable pushback rather than being personally offended. I confess that the uncharitably-pushing-back part of my brain automatically translates that to “I am personally offended but don’t want to admit it”, in the same way as it’s proverbially always correct to translate “it’s not about the money” to “it’s about the money” :-).
(For the avoidance of doubt, I don’t in fact think that auto-translation is fair; I’m explaining how I came to make the error I did, rather than claiming it wasn’t an error.)
[EDITED to replace “explicitly” in the second paragraph with “specifically”, which is what I had actually meant to write; I think my brain was befuddled by the “explicit” in the previous paragraph. Apologies for any confusion.]
How much of this ability is needed in order to avoid taking strong mystical experiences at face value?
Not sure how to quantify this. I also haven’t had a mystical experience myself, although I have experienced mildly altered states of consciousness without the use of drugs. (Which is not at all unique to dabbling in mysticism; you can also get them from concerts, sporting events, etc.) I imagine it’s comparable to the amount of ability needed to avoid taking a strong drug experience at face value while having it (esp. since psychoactive drugs can induce mystical experiences).
In the comment I was replying to, you were saying that some rationalists are being too risk-averse. It seems like you’re now backing off a bit and just talking about yourself?
I want to make a distinction between telling people what trade-offs I think they should be making (which I mostly can’t do accurately, because they have way more information than I do about that) and telling people I think the trade-offs they’re making are too extreme (based on my limited information about them + priors). E.g. I can’t tell you how much your time is worth in terms of money, but if I see you taking on jobs that pay a dollar an hour I do feel justified in claiming that probably you can get a better deal than that.
I’m worried that the epistemic risks get stronger the further you go down this path.
Yes, this is probably true. I don’t think you need to go very far in the mystical direction per se to get the benefits I want rationalists to get. Again, it’s more that I think there are some important skills that it’s worth it for rationalists to learn, and as far as I can tell the current experts in those skills are people who sometimes use vaguely mystical language (as distinct from full-blown mystics; these people are e.g. life coaches or therapists, professionally). So I want there to not be a meme in the rationality community along the lines of “people who use mystical language are crazy and we have nothing to learn from them,” because I think people would be seriously missing out if they thought that.
We do have empirical evidence about how strong these risks are though
That’s not clear to me because of blindspots. Consider the Sequences, for example: I think we can agree that they’re in some sense psychoactive, in that people really do change after reading them. What kind of epistemic risks did we take on by doing that? It’s unclear whether we can accurately answer that question because we’ve all been selected for thinking that the Sequences are great, so we might have shared blindspots as a result. I can tell a plausible story where reading the Sequences makes your life worse in expectation, in exchange for slightly increasing your chances of saving the world.
Similarly we all grow up in a stew of culture informed by various kinds of TV, movies, etc. and whatever epistemic risks are contained in those might be hidden behind blindspots we all share too. This is one of the things I interpret The Last Psychiatrist to have been saying.
Why do you (or I, or anyone else) need mysticism (either of the sort you’ve talked about, or whatever Jordan Peterson talks about) in order to have motivation and meaning? In my experience, it is completely unnecessary to deviate even one micrometer from the path of epistemic rectitude in order to have meaning and motivation aplenty. (I, if anything, find myself with far too little time to engage in all the important, exciting projects that I’ve taken on—and there is a long queue of things I’d love to be doing, that I just can’t spare time for.)
(Perhaps partly to blame here is the view—sadly all too common in rationalist circles—that nothing is meaningful or worth doing unless it somehow “saves the world”. But that is its own problem, and said view quite deserves to be excised. We ought not compound that wrong by indulging in woo—two wrongs don’t make a right.)
Rationalists who are epistemically strong are very lucky: you can use that strength in a place where it will actually help you, like investigating mysticism, by defending you from making the common epistemic mistakes there. It should be exciting for rationalists to learn about powerful tools that carry epistemic risks, because those are precisely the tools that rationalists should be best equipped to use compared to other people! (Separately, I also think that these tools have actually improved my epistemics, by improving my ability to model myself and other people.)
You do a disservice to that last point by treating it as a mere parenthetical; it is, in fact, crucial. If the tools in question are epistemically beneficial—if they are truth-tracking—then we ought to master them and use them. If they are not, then we shouldn’t. Whether the tools in question can be used “safely” (that is, if one can use them without worsening one’s epistemics, i.e. without making one’s worldview more crazy and less correct); and, conditional on that, whether said tools meaningfully improve our grasp on reality and our ability to discover truth—is, in fact, the whole question. (To me, the answer very much seems to be a resounding “no”. What’s more, every time I see anyone—“rationalist” or otherwise—treat the question as somehow peripheral or unimportant, that “no” becomes ever more clear.)
I have said this to you twice now and I am going to keep saying it: are we talking about whether mysticism would be useful for Said, or useful for people in general? It seems to me that you keep making claims about what is useful for people in general, but your evidence continues to be about whether it would be useful for you.
I consider myself to be making a weak claim, not “X is great and everyone should do it” but “X is a possible tool and I want people to feel free to explore it if they want.” I consider you to be making a strong claim, namely “X is bad for people in general,” based on weak evidence that is mostly about your experiences, not the experiences of people other than you. In other words, from my perspective, you’ve consistently been typical minding every time we talk about this sort of thing.
I’m glad that you’ve been able to find plenty of meaning and motivation in your life as it stands, but other people, like me, aren’t so lucky, and I’m frustrated at you for refusing to acknowledge this.
You do a disservice to that last point by treating it as a mere parenthetical; it is, in fact, crucial. If the tools in question are epistemically beneficial—if they are truth-tracking—then we ought to master them and use them. If they are not, then we shouldn’t.
The parenthetical was not meant to imply that the point was unimportant, just that it wasn’t the main thrust of what I was trying to say.
I’m glad that you’ve been able to find plenty of meaning and motivation in your life as it stands, but other people, like me, aren’t so lucky, and I’m frustrated at you for refusing to acknowledge this.
Why do you say it’s luck? I didn’t just happen to find these things. It took hard work and a good long time. (And how else could it be? —except by luck, of course.)
I’m not refusing to acknowledge anything. I do not for a moment deny that you’re advocating a solution to a real problem. I am saying that your solution is a bad one, for most (or possibly even “all”) people—especially “rationalist”-type folks like you and I are. And I am saying that your implication—that this is the best solution, or maybe even the only solution—is erroneous. (And how else to take the comment that I have been lucky not to have to resort to the sort of thing you advocate, and other comments in a similar vein?)
So, to answer your question:
I have said this to you twice now and I am going to keep saying it: are we talking about whether mysticism would be useful for Said, or useful for people in general? It seems to me that you keep making claims about what is useful for people in general, but your evidence continues to be about whether it would be useful for you.
I, at least, am saying this: of course these things would not be useful for me; they would be detrimental to me, and to everyone, and especially to the sorts of people who post on, and read, Less Wrong.
Is this a strong claim? Am I very certain of it? It’s not my most strongly held belief, that’s for sure. I can imagine many things that could change my mind on this (indeed, given my background[1], I start from a place of being much more sympathetic to this sort of thing than many “skeptic” types). But what seems to me quite obvious is that in this case, firm skepticism makes a sensible, solid default. Starting from that default, I have seen a great deal of evidence in favor of sticking with it, and very little evidence (and that, of rather low quality) in favor of abandoning it and moving to something like your view.
So this is (among other reasons) why I push for specifics when people talk about these sorts of things, and why I don’t simply dismiss it as woo and move on with my life (as I would if, say, someone from the Flat Earth Society were to post on Less Wrong about the elephants which support the world on their backs). It’s an important thing to be right about. The wrong view seems plausible to many people. It’s not so obviously wrong that we can simply dismiss it without giving it serious attention. But (it seems to me) it is still wrong—not only for me, but in general.
I am going to make one more response (namely this one) and then stop, because the experience of talking to you is painful and unpleasant and I’d rather do something else.
And I am saying that your implication—that this is the best solution, or maybe even the only solution—is erroneous.
I don’t think I’ve said anything like that here. I’ve said something like that elsewhere, but I certainly don’t mean anything like “mysticism is the only solution to the problem of feeling unmotivated” since that’s easy to disprove with plenty of counterexamples. My position is more like:
“There’s a cluster of things which look vaguely like mysticism which I think is important for getting in touch with large and neglected parts of human value, as well as for the epistemic problem of how to deal with metacognitive blind spots. People who say vaguely mystical things are currently the experts on doing this although this need not be the case in principle, and I suspect whatever’s of value that the mystics know could in principle be separated from the mysticism and distilled out in a form most rationalists would be happy with, but as far as I know that work mostly hasn’t been done yet. Feeling more motivated is a side effect of getting in touch with these large parts of human value, although that can be done in many other ways.”
(Perhaps partly to blame here is the view—sadly all too common in rationalist circles—that nothing is meaningful or worth doing unless it somehow “saves the world”.
It seems tautologous to me that if thing A is objectively more important than thing B, then, all other things being equal, you should be doing thing A. Mysticism isn’t a good fit for the standard rationalist framing of “everything is ultimately about efficiently achieving arbitrary goals”, but a lot of other things aren’t either, and the framing itself needs justification.
It seems tautologous to me that if thing A is objectively more important than thing B, then, all other things being equal, you should be doing thing A.
This certainly sounds true, except that a) there’s no such thing as “objectively more important”, and b) even if there were, who says that “saving the world” is “objectively more important” than everything else?
Mysticism isn’t a good fit for the standard rationalist framing of “everything is ultimately about efficiently achieving arbitrary goals”, but a lot of other things aren’t either, and the framing itself needs justification.
Well, I certainly agree with you there—I am not a big fan of that framing myself—but I don’t really understand whether you mean to be disagreeing with me here, or what. Please clarify.
Saving the world certainly does seem to be an instrumentally convergent strategy for many human terminal values. Whatever you value, it’s hard to get more of it if the world doesn’t exist. This point should be fairly obvious, and I find myself puzzled as to why you seem to be ignoring it entirely.
Please note that you’ve removed the scare quotes from “saving the world”, and thus changed the meaning. This suggests several possible responses to your comment, all of which I endorse:
It seems likely, indeed, that saving the world would be the most important thing. What’s not clear is whether ‘“saving the world”’ (as it’s used in these sorts of contexts) is the same thing as ‘saving the world’. It seems to me that it’s not.
It’s not clear to me that the framework of “the world faces concrete threats X, Y, and Z; if we don’t ‘save the world’ from these threats, the world will be destroyed” is even sensible in every case where it’s applied. It seems to me that it’s often misapplied.
If the world needs saving, is it necessary that all of everyone’s activity boil down to saving it? Is that actually the best way to save the world? It seems to me that it is not.
Holy shit, yes, thank you, this is exactly what has been motivating all of my contributions to LW 2.0. What is even the point of strengthening your epistemics if you aren’t going to then use those strong epistemics to actually do something?
I first read the Sequences 6 years ago, and since then what little world-saving-relevant effort I’ve put in has been entirely other people asking me to join in on their projects. The time I spent doing that (at SPARC, at CFAR workshops, at MIRI workshops) was great; an embarrassing amount of the time I spent not doing that (really, truly embarrassing; amounts of time only a math grad student could afford to spend) was wasted on various forms of escapism (random internet browsing, TV, video games, anime, manga), because I was sad and lonely and put a lot of effort into avoiding having to deal with that (including avoiding debugging it at CFAR workshops because it was too painful to think about). At almost no point did I have the motivation to start a project on my own, and I didn’t.
I’ve been working intently on debugging this for the last year or so and it finally more or less got solved in the beginning of February (I’ve been avoiding saying too much about this directly for the last month because I wanted to check that it’s sticking—so far so good, one month in). I did not solve this problem by getting better at epistemics, at least not in any way that would have been legible to a skeptical rationalist. I solved it by following my instincts towards Circling, Tantra workshops, the Authentic Man Program—all interventions in a cluster that rationalists are only beginning to talk about publicly. (And improving my diet, that mattered too.)
I think there is an entire class of interventions that can radically improve your motivation and your sense that your life has meaning, which actually matters even if all I’m optimizing for is x-risk reduction (which it’s not), and I really worry that 1) some rationalists are cutting themselves off from investigating these interventions because they’re, from my perspective, way too worried about epistemic risks, and 2) this position will become the LW 2.0 default. If that happens I’m going to leave.
Rationalists who are epistemically strong are very lucky: you can use that strength in a place where it will actually help you, like investigating mysticism, by defending you from making the common epistemic mistakes there. It should be exciting for rationalists to learn about powerful tools that carry epistemic risks, because those are precisely the tools that rationalists should be best equipped to use compared to other people! (Separately, I also think that these tools have actually improved my epistemics, by improving my ability to model myself and other people.)
This is an interesting idea, but how does someone tell whether they’re strong enough to avoid making the common epistemic mistakes when investigating mysticism? For example, if I practice meditation I might eventually start experiencing what Buddists call vipassana (“insight into the true nature of reality”). I don’t know if I’d be able to avoid treating those experiences as some sort of direct metaphysical knowledge as most people apparently do, as opposed to just qualia generated by my brain while it’s operating differently from normal (e.g., while in a state of transient hypofrontality).
There’s probably a number of distinct epistemic risks surrounding mysticism. Bad social dynamics in the face of asymmetric information might be another one. (Access to mystical experiences is hard to verify by third parties but tempting to claim as a marker of social status.) I don’t know how someone or some community could be confident that they wouldn’t fall prey to one of these risks.
Good question. You can test your ability to avoid mistaking strong emotions for strong beliefs in general. For example, when you get very angry at someone, do you reflexively believe that they’re a terrible person? When you get very sad, do you reflexively believe that everything is terrible? When you fall in love with someone, do you reflexively believe that they have only good qualities and no bad qualities? Etc.
I know I keep saying this, but it keeps being true: for me a lot of my ability to do this, and/or my trust in my ability to do this, came from circling, and specifically repeatedly practicing the skill of distinguishing my strong emotional reactions to what was happening in a circle from my best hypotheses about what was happening.
I can’t tell people what level of confidence they should want before trying this sort of thing, but I decided based on my instincts that the risk for me personally was low enough relative to the possible benefits that I was going to go for it, and things have been fine as far as my outside view is concerned so far, e.g. my belief in physics as standardly understood has not decreased at all.
Also, to some extent I feel like this argument proves too much. There are epistemic risks associated to e.g. watching well-made TV shows or movies, or reading persuasive writing, and rationalists take on these epistemic risks all the time without worrying about them.
How much of this ability is needed in order to avoid taking strong mystical experiences at face value?
In the comment I was replying to, you were saying that some rationalists are being too risk-averse. It seems like you’re now backing off a bit and just talking about yourself?
I’m worried that the epistemic risks get stronger the further you go down this path. Have you had any mystical experiences similar to vipassana yet? If not, your continuing belief in physics as standardly understood does not seem to address my worry.
We do have empirical evidence about how strong these risks are though, and the epistemic risks associated with “investigating mysticism” seem much stronger than those associated with watching well-made TV shows and movies, or reading persuasive writing. There are other activities, for example joining a church (let’s say for the social benefits), that I think have epistemic risks more comparable with investigating mysticism, and rationalists do worry about them.
P.S., After writing the above I viewed some interviews of Jeffery Martin (author of the PNSE paper previously discussed here), and he comes across as not being obviously irrational or epistemically corrupted by his investigations into mystical experiences. For example he claims to have gone through PNSE locations 1 through 4, but unlike the research subjects that he described in the paper, does not seem to have (or managed to overcome) a “tremendous sense of certainty that participants were experiencing a ‘deeper’ or ‘more true’ reality”. However he does seem to consistently oversell the benefits of PNSE relative to his own descriptions in the paper (which I infer from his interviews was largely if not completely written before he “transitioned” to PNSE himself), and this makes me think he still got corrupted in a more subtle way.
I’ve only taken a few steps down the path that Qiaochu is following, but I have a few thoughts regarding epistemic risk-management:
If you’re ever going to investigate any altered-consciousness experiences at all, you’re going to have to take a risk. You can never be 100% sure that something is “epistemically safe”: certainty is impossible and time is limited.
There is clearly an efficient frontier of risk/reward tradeoffs. I’m also a fan of circling, which doesn’t ask you to accept any supernatural claims or dogmas and is incredibly useful for understanding the landscape of human minds. A few circling sessions with seriously strange people can do a lot to cure one of typical mind fallacy. On the other hand, joining Scientology the same week you start experimenting with ayahuasca is probably unwise.
As a community, we can reduce risk by diversifying. Some of us will do LSD, some will do vipassana, some will circle, some will listen to 100 hours of Peterson… We should be able to notice if any particular subgroup are losing their minds. The real danger would occur if all of us suddenly started doing the same thing with no precautions.
What would it look like, if we noticed that a particular subgroup was beginning to lose its mind? I think it might look like a few unusually-rude people calling into question the alleged experiences of that particular subgroup and asking pointed questions about exactly what had happened to them and exactly why they thought themselves better off for it; and like the members of that particular subgroup responding with a combination of indignation and obfuscation: “we’ve definitely been changed for the better, but of course we can’t expect you to understand what it’s like if it hasn’t happened to you, so why do you keep pushing us for details you know we won’t be able to give?” / “I find it very discouraging to get this sort of response, and if it keeps happening I’m going to leave”; and like some of the more community-minded folks objecting to the rudeness of the questioners, observing acerbically that it always seems to be the same people asking those rude questions and wondering whether the emperor really has any clothes, and maybe even threatening to hand out bans.
All of which sounds kinda familiar.
I don’t actually think that … let’s call it the Berkeley School of rationality, though I’m not sure what fraction of its members are actually in Berkeley … is really losing its mind. (I’m not 100% sure it isn’t, though.) And, for the avoidance of doubt, I think it would be a damn shame if LW lost the people who have made noises about possibly leaving if the local community is too rude to them. But if it were losing its mind, I think that might well look roughly the way things currently look.
Which, I think, means: maybe we can’t safely assume that if a particular subgroup were losing its mind then we’d notice and take the actions needed for epistemic safety. Because we’re (almost certainly correctly) not rising up and throwing out the Berkeleyans right now, nor would we (probably correctly) even if they got a couple of notches weirder than they are now … but by that point, if they were losing their minds, they would surely be posing a genuine epistemic threat to at least some of the rest of us.
gjm, point well taken. I wonder whether it would be easier for people inside Berkeley or outside it to spot that someone there is going seriously off the rails, and to say something about it.
Anyway, I do want to elaborate a little bit on my “Efficient Frontier” idea. If anyone can build a map of which “mystical experiences” are safe/dangerous/worthwhile/useless and for whom, it should be people like us. I think it’s a worthwhile project and it has to be done communally, given how different each person’s experience may be and how hard it is to generalize.
The main example here is Sam Harris, a hardcore epistemic rationalist who has also spent a lot of time exploring “altered states of consciousness”. He wrote a book about meditation, endorses psychedelics with caveats, is extremely hostile to any and all religions, and probably thinks that Peterson is kinda crazy after arguing with him for four hours. Those are good data points, but we need 20 more Sam Harrises. I’m hoping that LW can be the platform for them.
Perhaps we need to establish some norms for talking about “mystical experiences”, fake frameworks, altered consciousness etc. so that people feel safe both talking and listening.
There’s Daniel Ingram, Vincent Horn, Kenneth Folk and the other Buddhist geeks.
I was triggered by this initially, but I reread it and you’re making a completely reasonable point. I notice I’m still concerned about the possibility that your reasonable point / motte will be distorted into a less reasonable point / bailey.
That is not what I said. What I said is that if the pushback I’ve been getting becomes the default on LW 2.0, then I’m going to leave. This is a matter of people deciding what kind of place they want LW 2.0 to be. If they decide that LW 2.0 does not want to be the place for the things I want to talk about, then I’m going to respect that and talk about those things somewhere else. Staying would be unpleasant for everyone involved.
I concede the point. We can try asking what kinds of externally verifiable evidence would distinguish this world from a world in which people like Val and I have been talking about real things which we lack the skill to explain (in a way satisfying to skeptical rationalists) via text. One prediction I’m willing to make is that I’m now more capable of debugging a certain class of thorny emotional bugs, so e.g. I’m willing to predict that over the next few years I’ll help people debug such bugs at CFAR workshops and wherever else, and that those people will at least in expectation be happier, more productive, more willing to work on x-risk or whatever they actually want to do instead, less likely to burn out, etc.
(But, in the interest of trying to be even-handed about possible hypotheses that explain the current state of public evidence, it’s hard to distinguish the above world from a world in which people like Val and I are losing our minds and also becoming more charismatic / better at manipulation.)
I think that perhaps what bothers a lot of rationalists about your (or Valentine’s) assertions is down to three factors:
You don’t tend to make specific claims or predictions. I think you would come off better—certainly to me and I suspect to others—if you were to preregister hypotheses more, like you did in the above comment. I believe that you could and should be more specific, perhaps stating that over a six-month period you expect to work n more hours without burning out, or that a consensus of reports from outsiders about your mental well-being will show a marked positive change during a particular time period that the evaluators did not know was special. While these would obviously not constitute strong evidence, a willingness to informally test your ideas would at least signal honest belief.
You seem to make little to no attempt to actually communicate your ideas in words, or even define your concepts in words. Frankly, it continues to strike me as suspicious that you claim difficulty in even analogizing or approximating your ideas verbally. Even something as weak as the rubber-sheet analogy for General Relativity would—once again—signal an honest attempt.
There doesn’t seem to be consistency in the strength of claims surrounding frameworks. As mentioned elsewhere in the thread, Valentine seems to claim that mythic mode generated favorable coincidences, as if he were bribing the DM. Yet at other times Valentine seems to acknowledge that the narrative description of reality is at best of metaphorical use.
I think that given recent rationalist interest in meditation, fake frameworks, etc., and in light of what seems to be a case of miscommunication and/or under-communication, there should be some attempt to establish a common basis of understanding, so that if someone asks, “Are you saying x?” they can be instantly redirected to a page that gives the relevant definitions and claims. If you view this as impossible, do you think that that is a fact about your map or about the relevant territory?
Anyway, I really hope everyone can reach a point of mutual intelligibility, if nothing else.
I have several different responses to this which I guess I’ll also number.
Sure, fine, I’m willing to claim this. Everyone who has interacted with me both in the last month and, say, a year ago will tell you that I am visibly happier and doing more of what I actually want (“productive” can be a loaded term). People can ask Anna, Duncan, Lauren, etc. if they really want. I can also self-report that I’ve engaged in much less escapism (TV, movies, video games, etc.) this month than in most months of the last 5 years, and what little I have engaged in was mostly social.
I would love to be having this conversation; if the responses I’ve been getting had been of the form “hey, you seem to be making these interesting and non-obvious claims, what evidence do you have / what are your models / what are predictions you’re willing to make?” then I would’ve been happy to answer, but instead the responses were of the form “hey, have you considered the possibility that you’re evil?” I have a limited budget of time and attention I’m willing to spend on LW and my subjective experience is that I’ve been spending it putting out fires that other people have been starting. Please, I would love to have the nice conversation where we share evidence and models and predictions while maintaining principle of charity, but right now I mostly don’t have enough trust that I won’t be defected on to do this.
You’ll notice I haven’t written a top-level post about any of these topics. That’s precisely because I’m not yet willing to put in the time and effort necessary to get it up to epistemic snuff. I didn’t want to start this conversation yet; it was started by others and I felt a duty to participate in order to prevent people from becoming idea-inoculated against this whole circle of ideas.
This seems like an unfair conflation of what happened in the Kensho post and everything else. The Circling post was entirely an attempt to communicate in words! All of these comments are attempts to communicate in words!
This is exactly what the cell phone analogy in the Kensho post was for, although I also don’t want people to bucket me and Val too closely here; I’m willing to make weaker claims that I think I can explain more clearly, but haven’t done so yet for the reasons described above.
I warned Val that people would be unhappy about this. Here is one story I currently find plausible for explaining at least one form of synchronicity: operating in mythic mode is visible to other humans on some level, and causes them to want to participate in the myth that it looks like you’re in. So humans can sometimes collaboratively generate coincidences as if they were playing out an improv scene, or something. (Weak belief weakly held.)
As for consistency, it’s partly a matter of what level of claim I or anyone else is willing to defend in a given conversation. It may be that my true belief is strong belief A but that I expect it will be too difficult to produce a satisfying case for why I believe A (and/or that I believe that attempting to state A in words will cause it to be misinterpreted badly, or other things like that), so in the interest of signaling willingness to cooperate in the LW epistemic game, I mostly talk about weaker belief A’, which I can defend more easily, but maybe in another comment I instead talk about slightly weaker or slightly stronger belief A″ because that’s what I feel like I can defend that day. Do you really want to punish me for not consistently sticking to a particular level of weakening of my true belief?
I think it’s very difficult because of long experiential distances. This is to some extent a fact about my lack of skill and to some extent what I see as a fact about how far away some parts of the territory are from the experience of many rationalists.
Overall, from my point of view there’s a thing happening here roughly analogous to the Hero Licensing dialogue; if I spend all my time defending myself on LW like this instead of just using what I believe my skills to be to do cool stuff, then I won’t ever get around to doing the cool stuff. So at some point I am just going to stop engaging in this conversation, especially if people continue to assume bad faith on the part of people like me and Val, in order to focus my energy and attention on doing the cool stuff.
(This is my second comment on this site, so it is probable that the formatting will come out gross. I am operating on the assumption that it is similar to Reddit, given that both use Markdown.)
To be as succinct as possible, fair enough.
I want to have this conversation too! I was trying to express what I believe to be the origins of people’s frustrations with you, not to discourage you, although I can understand how I failed to communicate that.
I am going to bundle this up with the part of your reply that concerns experiential distance and respond to both. I suspect that a lot of the fear of epistemic contamination comes from the emphasis on personal experience. Personal (meatspace) experiences, especially in groups, can trigger floods of emotions and feelings of insight without those first being fed through rational processing. Therefore it seems reasonable to be suspicious of anyone who claims to teach through personal experience. That being said, the experimental spirit suggests the following course of action: get a small group and try to close their experiential gap gradually, while having them extensively document anything they encounter on the way, then publish that for peer analysis and digestion. Of course, that relies on more energy and time than you might have.
On a general level, I totally concede that I am operating from relatively weak ground. It has been a while—or at least felt like a while—since I read any of the posts I mentioned (tacitly or otherwise) with the exception of Kensho, so that is definitely coloring my vision.
I acknowledge that many people are responding to your ideas with unwarranted hostility and forcing you onto the defensive in a way that I know must be draining. So I apologize for essentially doing that in my original reply to you. I think that I, personally, am unacceptably biased against a lot of ideas due to their “flavor” so to speak, rather than their actual strength.
As to consistency, I actually do want to hold you to some standard of strength with respect to beliefs, because otherwise you could very easily make your beliefs unassuming enough to pass through arbitrary filters. I find ideas interesting; I want to know A, not any of its more easily defensible variants. But I don’t want to punish you or do anything that could even be construed as such.
In summary, I am sorry that I came off as harsh.
I recognize the concern here, but you can just have the System 1 experience and then do the System 2 processing afterwards (which could be seconds afterwards). It’s really not that hard. I believe that most rationalists can handle it, and I certainly believe that I can handle it. I’m also willing to respect the boundaries of people who don’t think they can handle it. What I don’t want is for those people to typical mind themselves into assuming that because they can’t handle it, no one else can either, and so the only people willing to try must be being epistemically reckless.
There are plenty of completely mundane skills that can basically only be taught in this way. Imagine trying to teach someone how to play basketball using only text, etc. There’s no substitute for personal experience in many skills, especially those involving the body, and in fact I think this should be your prior. It may not feel like this is the prior, but I think that is straight up a mistake; I’d guess that people’s experiences with learning skills here are distorted by 1) school, which heavily skews towards skills that can be learned through text, and 2) the selection effect of being LWers, liking the Sequences, etc. There’s a reason CFAR focuses on in-person workshops instead of e.g. blog posts or online videos.
Thank you.
Unfortunately my sense is strongly that other people will absolutely punish me for expressing A instead of any of its weaker variants—this is basically my story about what happened to Val in the Kensho post, where Val could have made a weaker and more defensible point (for example, by not using the word “enlightenment”) and chose not to—precisely because my inability to provide a satisfying case for believing A signals a lack of willingness to play the LW epistemic game, which is what you were talking about earlier.
(Umeshism: if you only have beliefs that you can provide a satisfying case for believing on LW, then your beliefs are optimized too strongly for defensibility-on-LW as opposed to truth.)
So I’m just not going to talk about A at all, in the interest of maintaining my cooperation signals. And given that, the least painful way for me to maintain consistency is to not talk about any of the weaker variants either.
It is probably true that most rationalists could handle it. It is also probably true, however, that people who can’t handle it could end up profoundly worse for the experience. I am not sure we should endorse potential epistemic hazards with so little certainty about both costs and benefits. I also grant that anything is a potential epistemic hazard and that reasoning under uncertainty is kind of why we bother with this site in the first place. This is all to say that I would like to see more evidence of this calculation being done at all, and that if I were not so geographically separated from the LWsphere, I would like to try these experiences myself.
I am not sure that it should be the prior for mental skills, however. As you pointed out, scholastic skills are almost exclusively (and almost definitionally) attainable through text. I know that I can and have learned math, history, languages, etc., through reading, and it seems like that is the correct category for Looking, etc., as well (unless I am mistaken about the basic nature of Looking, which is certainly possible).
This is a sad circumstance; I wish it were otherwise, and I understand why you have made the choice you have, considering the (rather ironically) immediate and visceral response you are used to receiving.
I’m not sure what “endorse” means here. My position is certainly not “everyone should definitely do [circling, meditation, etc.]”; mostly what I have been arguing for is “we should not punish people who try or say good things about [circling, meditation, etc.] for being epistemically reckless, or allege that they’re evil and manipulative solely on that basis, because I think there are important potential benefits worth the potential risks for some people.”
I still think you’re over-updating on school. For example, why do graduate students have advisors? At least in fields like pure mathematics that don’t involve lab work, it’s plausibly because being a researcher in these fields requires important mental skills that can’t just be learned through reading, but need to be absorbed through periodic contact with the advisor. Great advisors often have great students; clearly something important is being transmitted even if it’s hard to write down what.
My understanding of CFAR’s position is also that whatever mental skills it tries to teach, those skills are much harder to teach via text or even video than via an in-person workshop, and that this is why we focus so heavily on workshops instead of methods of teaching that scale better.
I know, right? Also ironically, learning how to not be subject to my triggers (at least, not as much as I was before) is another skill I got from circling.
I’m glad you got over the initial triggered-ness. I did wonder about being even more explicit that I don’t in fact think you guys are losing your minds, but worried about the “lady doth protest too much” effect.
I wasn’t (in case it isn’t obvious) by any means referring specifically to you, and in particular the “if it keeps happening I’m going to leave” wasn’t intended to be anything like a quotation from you or any specific other person. It was intended to reflect the fact that a number of people (I think at least three) of what I called the Berkeley School have made comments along those general lines—though I think all have taken the line you do here, that the problem is a norm of uncharitable pushback rather than being personally offended. I confess that the uncharitably-pushing-back part of my brain automatically translates that to “I am personally offended but don’t want to admit it”, in the same way as it’s proverbially always correct to translate “it’s not about the money” to “it’s about the money” :-).
(For the avoidance of doubt, I don’t in fact think that auto-translation is fair; I’m explaining how I came to make the error I did, rather than claiming it wasn’t an error.)
[EDITED to replace “explicitly” in the second paragraph with “specifically”, which is what I had actually meant to write; I think my brain was befuddled by the “explicit” in the previous paragraph. Apologies for any confusion.]
Not sure how to quantify this. I also haven’t had a mystical experience myself, although I have experienced mildly altered states of consciousness without the use of drugs. (Which is not at all unique to dabbling in mysticism; you can also get them from concerts, sporting events, etc.) I imagine it’s comparable to the amount of ability needed to avoid taking a strong drug experience at face value while having it (esp. since psychoactive drugs can induce mystical experiences).
I want to make a distinction between telling people what trade-offs I think they should be making (which I mostly can’t do accurately, because they have way more information than I do about that) and telling people I think the trade-offs they’re making are too extreme (based on my limited information about them + priors). E.g. I can’t tell you how much your time is worth in terms of money, but if I see you taking on jobs that pay a dollar an hour I do feel justified in claiming that probably you can get a better deal than that.
Yes, this is probably true. I don’t think you need to go very far in the mystical direction per se to get the benefits I want rationalists to get. Again, it’s more that I think there are some important skills that it’s worth it for rationalists to learn, and as far as I can tell the current experts in those skills are people who sometimes use vaguely mystical language (as distinct from full-blown mystics; these people are e.g. life coaches or therapists, professionally). So I want there to not be a meme in the rationality community along the lines of “people who use mystical language are crazy and we have nothing to learn from them,” because I think people would be seriously missing out if they thought that.
That’s not clear to me because of blindspots. Consider the Sequences, for example: I think we can agree that they’re in some sense psychoactive, in that people really do change after reading them. What kind of epistemic risks did we take on by doing that? It’s unclear whether we can accurately answer that question because we’ve all been selected for thinking that the Sequences are great, so we might have shared blindspots as a result. I can tell a plausible story where reading the Sequences makes your life worse in expectation, in exchange for slightly increasing your chances of saving the world.
Similarly we all grow up in a stew of culture informed by various kinds of TV, movies, etc. and whatever epistemic risks are contained in those might be hidden behind blindspots we all share too. This is one of the things I interpret The Last Psychiatrist to have been saying.
Why do you (or I, or anyone else) need mysticism (either of the sort you’ve talked about, or whatever Jordan Peterson talks about) in order to have motivation and meaning? In my experience, it is completely unnecessary to deviate even one micrometer from the path of epistemic rectitude in order to have meaning and motivation aplenty. (I, if anything, find myself with far too little time to engage in all the important, exciting projects that I’ve taken on—and there is a long queue of things I’d love to be doing, that I just can’t spare time for.)
(Perhaps partly to blame here is the view—sadly all too common in rationalist circles—that nothing is meaningful or worth doing unless it somehow “saves the world”. But that is its own problem, and said view quite deserves to be excised. We ought not compound that wrong by indulging in woo—two wrongs don’t make a right.)
You do a disservice to that last point by treating it as a mere parenthetical; it is, in fact, crucial. If the tools in question are epistemically beneficial—if they are truth-tracking—then we ought to master them and use them. If they are not, then we shouldn’t. Whether the tools in question can be used “safely” (that is, whether one can use them without worsening one’s epistemics, i.e. without making one’s worldview more crazy and less correct), and, conditional on that, whether said tools meaningfully improve our grasp on reality and our ability to discover truth—that is, in fact, the whole question. (To me, the answer very much seems to be a resounding “no”. What’s more, every time I see anyone—“rationalist” or otherwise—treat the question as somehow peripheral or unimportant, that “no” becomes ever more clear.)
I have said this to you twice now and I am going to keep saying it: are we talking about whether mysticism would be useful for Said, or useful for people in general? It seems to me that you keep making claims about what is useful for people in general, but your evidence continues to be about whether it would be useful for you.
I consider myself to be making a weak claim, not “X is great and everyone should do it” but “X is a possible tool and I want people to feel free to explore it if they want.” I consider you to be making a strong claim, namely “X is bad for people in general,” based on weak evidence that is mostly about your experiences, not the experiences of people other than you. In other words, from my perspective, you’ve consistently been typical minding every time we talk about this sort of thing.
I’m glad that you’ve been able to find plenty of meaning and motivation in your life as it stands, but other people, like me, aren’t so lucky, and I’m frustrated at you for refusing to acknowledge this.
The parenthetical was not meant to imply that the point was unimportant, just that it wasn’t the main thrust of what I was trying to say.
Why do you say it’s luck? I didn’t just happen to find these things. It took hard work and a good long time. (And how else could it be, except by luck, of course?)
I’m not refusing to acknowledge anything. I do not for a moment deny that you’re advocating a solution to a real problem. I am saying that your solution is a bad one, for most (or possibly even “all”) people—especially “rationalist”-type folks like you and I are. And I am saying that your implication—that this is the best solution, or maybe even the only solution—is erroneous. (And how else to take the comment that I have been lucky not to have to resort to the sort of thing you advocate, and other comments in a similar vein?)
So, to answer your question:
I, at least, am saying this: of course these things would not be useful for me; they would be detrimental to me, and to everyone, and especially to the sorts of people who post on, and read, Less Wrong.
Is this a strong claim? Am I very certain of it? It’s not my most strongly held belief, that’s for sure. I can imagine many things that could change my mind on this (indeed, given my background[1], I start from a place of being much more sympathetic to this sort of thing than many “skeptic” types). But what seems to me quite obvious is that in this case, firm skepticism makes a sensible, solid default. Starting from that default, I have seen a great deal of evidence in favor of sticking with it, and very little evidence (and that, of rather low quality) in favor of abandoning it and moving to something like your view.
So this is (among other reasons) why I push for specifics when people talk about these sorts of things, and why I don’t simply dismiss it as woo and move on with my life (as I would if, say, someone from the Flat Earth Society were to post on Less Wrong about the elephants which support the world on their backs). It’s an important thing to be right about. The wrong view seems plausible to many people. It’s not so obviously wrong that we can simply dismiss it without giving it serious attention. But (it seems to me) it is still wrong—not only for me, but in general.
[1] No, it’s not religion.
I am going to make one more response (namely this one) and then stop, because the experience of talking to you is painful and unpleasant and I’d rather do something else.
I don’t think I’ve said anything like that here. I’ve said something like that elsewhere, but I certainly don’t mean anything like “mysticism is the only solution to the problem of feeling unmotivated” since that’s easy to disprove with plenty of counterexamples. My position is more like:
“There’s a cluster of things which look vaguely like mysticism which I think is important for getting in touch with large and neglected parts of human value, as well as for the epistemic problem of how to deal with metacognitive blind spots. People who say vaguely mystical things are currently the experts on doing this although this need not be the case in principle, and I suspect whatever’s of value that the mystics know could in principle be separated from the mysticism and distilled out in a form most rationalists would be happy with, but as far as I know that work mostly hasn’t been done yet. Feeling more motivated is a side effect of getting in touch with these large parts of human value, although that can be done in many other ways.”
It seems tautologous to me that if thing A is objectively more important than thing B, then, all other things being equal, you should be doing thing A. Mysticism isn’t a good fit for the standard rationalist framing of “everything is ultimately about efficiently achieving arbitrary goals”, but a lot of other things aren’t either, and the framing itself needs justification.
This certainly sounds true, except that a) there’s no such thing as “objectively more important”, and b) even if there were, who says that “saving the world” is “objectively more important” than everything else?
Well, I certainly agree with you there—I am not a big fan of that framing myself—but I don’t really understand whether you mean to be disagreeing with me, here, or what. Please clarify.
Saving the world certainly does seem to be an instrumentally convergent strategy for many human terminal values. Whatever you value, it’s hard to get more of it if the world doesn’t exist. This point should be fairly obvious, and I find myself puzzled as to why you seem to be ignoring it entirely.
Please note that you’ve removed the scare quotes from “saving the world”, and thus changed the meaning. This suggests several possible responses to your comment, all of which I endorse:
It seems likely, indeed, that saving the world would be the most important thing. What’s not clear is whether ‘“saving the world”’ (as it’s used in these sorts of contexts) is the same thing as ‘saving the world’. It seems to me that it’s not.
It’s not clear to me that the framework of “the world faces concrete threats X, Y, and Z; if we don’t ‘save the world’ from these threats, the world will be destroyed” is even sensible in every case where it’s applied. It seems to me that it’s often misapplied.
If the world needs saving, is it necessary that all of everyone’s activity boil down to saving it? Is that actually the best way to save the world? It seems to me that it is not.