Nate Soares’ Life Advice
Disclaimer: Nate gave me some life advice at EA Global; I thought it was pretty good, but it may or may not be useful for other people. If you think any of this would be actively harmful for you to apply, you probably shouldn’t.
Notice subtle things in yourself
This includes noticing things like confusion, frustration, dissatisfaction, enjoyment, etc. For instance, if you’re having a conversation with somebody and they’re annoying you, it’s useful to notice that you’re getting a little frustrated before the situation gets worse.
A few weeks ago my colleagues and I wanted to do something fun, and decided to play laser tag at our workplace. However, we couldn’t find the laser tag guns. As I began to comb the grounds for the guns for the second time I noticed that I felt like I was just going through the motions, and didn’t really expect my search to be fruitful. At this point I stopped and thought about the problem, and realized that I had artificially constrained the solution space to things that would result in us playing laser tag at the office, rather than things that would result in us having fun. So I stopped looking for the guns and we did an escape room instead, which made for a vastly more enjoyable evening.
If you’re not yet at the point where you can notice unsubtle things in yourself, you can start by working on that and move up from there.
Keep doing the best thing, even if you don’t have a legible story for why it’s good
Certainly the actions you’re taking should make sense to you, but your reasoning doesn’t have to be 100% articulable, and you don’t need to justify yourself in an airtight way. Some things are easier to argue than other things, but this is not equivalent to being more correct. For instance, I’m doing AI alignment stuff, and I have the option of reading either a textbook on linear algebra or E.T. Jaynes’ probability theory textbook.
Reading about linear algebra is very easy to justify in a way that can’t really be disputed; it’s just obviously true that linear algebra is directly and widely applicable to ML. It’s harder to justify reading Jaynes to the same level, even though I think it’s a pretty sound thing to do (I think I will become better at modeling the world, learn about various statistical pitfalls, absorb Jaynes’ philosophical and historical insights, etc.), and in fact a better use of my time right now than learning linear algebra in more depth.
This bit of advice is mostly about not needing to be able to justify yourself to other people (e.g. friends, family) to take the best visible action. However, it is also the case that you might have internalized social pressure such that you feel the need to justify a course of action to yourself in a way that would be legible to other people/justifiable in a social setting. This is also unnecessary.
Relatedly, you don’t need to “get” motivation; you can just continue to take the best action you can see.
Don’t go insane
Apparently a good number of people in Nate’s social circle have gone insane—specifically, they have taken facts about the world (e.g. the universal prior being malign) as “invitations” to go insane. He also noted that many of these people took LSD prior to going insane, and that this may have “loosened” something in their minds.
This may be a particular danger for people who value taking ideas seriously as a virtue, because they might go full throttle on an idea that conflicts with common sense, and end up insane as a result. When asking a non-Nate for feedback on this post, I was told that some concrete things that people have taken as “invitations” to go insane are: decision theory (specifically acausal trade), things thought while meditating, and the idea that minds are made of “parts” (e.g. sub-agents).
Nate says that the way you avoid this pitfall is that when you hear the “siren call of insanity,” you choose to stay sane instead. This seems vaguely reasonable to me, but it’s not very crisp in my mind and I don’t quite know what it looks like to apply this in practice.
Reject false dichotomies
Don’t epistemically commit to the best option you can currently see, especially not in a way that would permanently alter you/prevent you from backtracking later. For instance, if the only two moral philosophies you’re aware of are Christianity and nihilism, and you decide that God doesn’t actually exist (and therefore Christianity is obviously wrong), you don’t have to go full throttle down the nihilism path.
Don’t throw the baby out with the bathwater—in fact, don’t lose any part of the baby. If all the epistemic options seem to violate something important to you, don’t just blast through that part of your values. Apparently this helps with not going insane.
Research advice
Nate told me that the most important skill for doing research is not thinking you know things when you actually don’t. This is closely tied to noticing confusion. It’s also related to “learning (important) things carefully”; for instance, if you’re teaching yourself physics, you want to make sure you truly understand the material you’re learning, and move at a pace such that you can do that (rather than going through it quickly but haphazardly).
I think I might just commit to staying away from LSD and Mind Illuminated style meditation entirely. Judging by the frequency of word of mouth accounts like this, the chance of going a little or a lot insane while exposed to them seems frighteningly high.
I wonder why these long term effects seem relatively sparsely documented. Maybe you have to take the meditation really seriously and practice diligently for this stuff to have a high chance of happening, and people in this community do that often, but the average study population doesn’t?
There can also be factors in this community that make people both unusually likely to go insane and to also try things like meditation and LSD in an attempt to help themselves. It’s a bit hard to say given that the post is so vague on what exactly “insanity” means, but the examples of acausal trade etc. make me suspect that it’s related to a specific kind of anxiety which seems to be common in the community.
That same kind of anxiety also made me (temporarily) go very slightly crazy many years ago, when I learned about quantum mechanics (and I had neither done psychedelics nor had I yet started meditating at the time), and it feels like the same kind of thing that causes the occasional person to freak out about Roko’s Basilisk. I think those kinds of people are particularly likely to be drawn to LW, because they subconsciously see rationality as a way to try to control their anxiety, and that same thing causes them to seek out psychedelics and meditation. And then rationality, meditation, and psychedelics are all things that might also dismantle some of the existing defenses their mind has against that anxiety.
I suspect it’s related to the fact that we’ve gotten ourselves off-distribution from the emergencies that used to be common, and thus AI and the Singularity are interpreted as immediate emergencies when they aren’t.
I’ll also make a remark that LW focuses on the tails, so things tend to be more extreme than usual.
Yeah, I think people who are high in abstract thinking and believing their beliefs and anxious thought patterns should really stay away from psychedelics and from leaning too hard into their run-away thought trains. Also, try to stay grounded with people and activities that don’t send you off into abstract thought space. Spend some time with calm normal people who look at the world in straightforward ways, not only creative wild thinkers. Spend time doing hobbies outdoors that use your physical body and attention in satisfying ways, keeping you engaged enough to stay out of your head.
I think someone who fits this description can avoid the risks of 'going insane' while still using their abilities for good. For example, in my own case (I think the first two describe me, and the third one sort of does), if I were to apply these suggestions, then my creative output related to alignment would probably drop significantly.
(I agree with not trying psychedelics though. Even e.g. nootropics and ADHD meds are things I'm really cautious with, cause I don't wanna mess up some part of my process.)
For anyone reading this post in the future, I'd instead suggest doing things meant to help you channel your ability: being conscious and reflective about your thoughts, revisiting basic rationality techniques and theory occasionally, noticing privileged hypotheses (while still allowing yourself to ponder them if you're just doing it because you find it interesting; I think letting one's mind explore is also important to generating important ideas and making connections).
“Please don’t throw your mind away” in this other sense of counteracting your tendency to think abstractly; you might be able to do a lot of good with it.
I think your suggestions are good as well. To be clear: I didn’t mean that I think one should spend a large fraction of their time just ‘staying grounded’. More like, a few hours a week.
The way I model attention is that it is (metaphorically) a cirrus (a tendril-like appendage, in the biological sense) of thought that you extend into the world and then retract into your mind. If you leave it out for too long, it gets tangled up in the forest of all knowledge; if you keep it inside for too long, you become unable to respond to your environment.
People who are extremely online tend to send their attention cirrus into the internet, where it is prone to become a host to memes that use addiction to bypass your mind’s typical defenses against infection.
Anything that you really enjoy to the point of losing self-control comes under the category of being a disease: whether that’s social media, programming, fiction, gaming, tentacle pornography, research, or anime.
Even if they were somehow extremely beneficial normally (which is fairly unlikely), any significant risk of going insane seems much too high. I would posit they have such a risk for exactly the same reason -when using them, you are deliberately routing around very fundamental safety features of your mind.
The MBSR studies are two-month interventions. They are not going to have the same strong effects as people meditating seriously for years.
On the other hand, the studies that investigate people who meditate a lot are often from a monastic setting where people have teachers, which is quite different from someone meditating without a teacher and orienting themselves with The Mind Illuminated.
Possible selection effect?
Maybe meditation moves people in a random direction. Those who get hurt, mostly stop meditating, so you won’t find many of them in the “meditating seriously for years” group.
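The selection effect described above can be illustrated with a toy simulation. This is a hedged sketch with entirely made-up dynamics (a zero-mean random walk for wellbeing and an assumed dropout rate for people who get hurt), not a model of real meditation outcomes:

```python
import random

random.seed(0)

def simulate(n_people=100_000, years=5, dropout_if_hurt=0.9):
    """Toy survivorship-bias model.

    Each meditator's wellbeing takes a +/-1 random step per year
    (zero mean, so meditation has no real effect here). People whose
    wellbeing goes negative mostly quit, so only survivors show up in
    the 'meditated seriously for years' group we get to observe.
    Returns the average wellbeing among those survivors.
    """
    survivors = []
    for _ in range(n_people):
        wellbeing = 0
        quit_early = False
        for _ in range(years):
            wellbeing += random.choice([-1, 1])
            if wellbeing < 0 and random.random() < dropout_if_hurt:
                quit_early = True
                break
        if not quit_early:
            survivors.append(wellbeing)
    return sum(survivors) / len(survivors)

avg = simulate()
print(avg)  # positive, even though the underlying effect is zero-mean
```

Even with a zero-mean effect, the observed long-term group looks like it benefited, because the people who got hurt left the sample.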
I find it ironic that in a community that values clear thinking many people do things with their brains similar to giving their computer a hard kick or setting it on fire and expecting that to improve its performance.
It's the fucking impulsive contrarianism, isn't it? The more people keep telling you about those who have ruined their lives by taking drugs, the more certain you feel that you will do it the right way that will magically give you mental superpowers, unlike all those idiots who were simply doing it wrong. Induction is for losers. Also, you are smarter than everyone, and you did your "research" on the internet, or asked a friend.
I think the long-term effects of LSD and other drugs are documented sufficiently. It’s just, if there are 100 boring statistics about people who fucked up their lives, and 1 exciting speculative book by Timothy Leary, everyone will talk about the latter.
Ah, you meant meditation. I guess the standard excuse is that people who got hurt by meditation were either doing it wrong, or they had some extremely rare pre-existing condition. (Translated: no matter how many people get hurt by doing X, it obviously does not apply to me. Because they were stupid and they were doing it wrong, and I am smart and I will be doing it right. Also, they were weak, and I am invulnerable.) The same excuse applies to people who join an MLM pyramid scheme and lose their money, or people who pray to God to cure their sick child and then the child dies anyway. The theory is always correct; if it doesn’t work for you, you were clearly applying it incorrectly.
How many more people have to die before we learn the thing that a random 10-year-old kid could tell us?
I think that before you write strongly-worded comments accusing people of being idiots for privileging anecdotes more heavily than statistics, you should first establish that the side you’re taking is actually the one supported by statistics and that it’s the other side which is relying on anecdotes, and not vice versa.
My read is that for meditation and psychedelics, the actual research tends to show that they are generally low-risk/beneficial (even to the point of their mental health benefits starting to gradually overcome the stigma against psychedelics among academic researchers) [e.g. 1, 2, 3 for psychedelics] and it’s actually the bad cases that are the unrepresentative anecdotes.
How unrepresentative? What probability of becoming the “bad case” would you consider acceptable?
If there is a hypothetical number of bad cases that would make you change your mind if all those bad cases happened inside the rationality community, how big approximately would that number be?
Acceptable for writing highly derisive comments about people who try psychedelics? I’m not really a fan of that approach in any case, tbh.
Acceptable for psychedelics being worth trying? I don’t know, that seems like it would depend on the person’s risk tolerance and what they’re hoping to get out of it. I don’t consider it my business to decide e.g. what level of risk is unacceptable if someone wants to try extreme sports, nor do I consider it my business to tell people at what risk level they are allowed to try out psychedelics.
I’m more in favor of talking about the possible risks honestly and openly but without exaggeration, and also talking about responsible use, how to ameliorate the risks, and what the possible risk factors are.
The point of Kaj Sotala’s comment is that there is a selection bias that is severe enough that your comments need to have major caveats to them (deepthoughtlife made a similar error.) I won’t determine your risk tolerance for medicine, but what I can say is that we should update in the opposite direction: That psychedelics are safe and maybe useful for the vast majority of people, and the ones that were truly harmed are paraded as anecdotes, showing massive selection biases, and not representing the median person in the world.
Suppose you start taking LSD. Not as part of a scientific experiment where the dosage was reviewed and approved by a research ethics board, but based on a friend's recommendation and internet research you did yourself, using doses as big as your friend/research recommends, repeating as often as your friend/research considers safe.
(Maybe, let’s also include the risk of self-modification, e.g. the probability that once you overcome the taboo and find the results of the experiment appealing, you may be tempted to try a greater dose the next time, or increase the frequency. I am mentioning this, because—yes, anecdotally—people experimenting with psychoactive substances sometimes do exactly this.)
Are you saying that the probability of serious and irreversible harm to your brain is smaller than 1%?
Or are you saying that the potential benefits are so large, that the 1% chance of seriously and irreversibly harming your brain is totally worth it?
I think that at least one of these two statements needs to be true in order to make experimenting with LSD worth it. I just don't know which one (or possibly both) you are making.
Note that the 1% probability of hurting your brain (heck, even 40% probability) is still hypothetically compatible with the statement that for a median person the experiment is a net benefit.
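The point that a small probability of large harm can coexist with a median-positive outcome is just an expected-value calculation. Here is a minimal sketch with purely hypothetical numbers (the probabilities and payoffs below are illustrative assumptions, not estimates of actual LSD risk):

```python
# Hypothetical numbers, chosen only to illustrate the structure of the
# argument: a rare catastrophic outcome can make the expected value
# negative even though 99% of people come out slightly ahead.
p_harm = 0.01      # assumed probability of serious, irreversible harm
harm = -1000.0     # assumed badness of that outcome (arbitrary units)
p_benefit = 0.99   # everyone else gets the typical (median) outcome
benefit = 5.0      # assumed typical benefit (arbitrary units)

ev = p_harm * harm + p_benefit * benefit
print(ev)  # -5.05: negative EV, yet the median person benefits
```

The median experience is a +5, but the expectation is negative; which number matters depends on your risk tolerance.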
I suspect a large part of the problem is that LW is trying to find answers in a field that has no reliable results: improving minds (a.k.a. nootropics). We know that existing drugs at most give you emotional health, and even there, there are limits. So it's not surprising that attempts to improve minds currently fail a lot.
I would place an 80-90% prior probability on the boring answer being correct: the brain is a complicated mess that is hard to affect in non-genetic ways. That said, even if genetics can figure out how to improve intelligence, there's a further problem: people would figuratively riot because of equality memes that say intelligence doesn't matter and anyone can become talented (this is absolutely not true, but memes like this don't care about truth).
I would guess that different brains are damaged in different ways. Sometimes it’s genetic. Sometimes it’s just too much or too little of some chemical produced in the brain (potentially also for genetic reasons), which might be fixable by a chemical intervention. (Or maybe not, because the damage caused by the chemical imbalance might be irreversible.)
But different brains will require different chemical interventions. Maybe your friend was X-deficient and took extra X, and it made the symptoms go away. But your brain may be Y-deficient, so adding X will not help. Or maybe your brain already has too much X, and adding more X will fuck you up immediately.
If a doctor told me that statistically, people in my condition are likely to benefit from X, and the doctor would prescribe me a safe dose of X, and then monitor whether my condition improves or not… I might actually try it.
But that is completely different from e.g. a friend telling me that they know someone who took X and was happy about the outcome. First, it’s not obvious that X was actually responsible for the outcome. Maybe the person changed a few things in their life at the same time, and something else worked. Or maybe the person is just addicted, and “happiness” is what their addicted brain reports when asked how they feel about taking X. But most importantly, it may be the case that X helps some people, and hurts other people, and this person is a lucky exception, while those bad cases everyone heard about are the rule. And if I tried X and it wouldn’t work for me, I can already predict that the friend’s advice would be something like “try more” or “try something stronger”.
I read advice with different eyes since I read Recommendations vs Guidelines on SSC. I tried to think of or find the guidelines that put the advice into perspective. Let’s try this here:
Notice subtle things in yourself… unless you notice very many things in yourself already or tend to jump to conclusions.
Keep doing the best thing, even if you don’t have a legible story for why it’s good. But not if you suffer from it (see also Don’t go insane) or if you have other strong evidence against it—which you may find by researching advice.
Don’t go insane, but don’t worry too much about it either. A healthy environment should be prevention enough.
Reject false dichotomies… as soon as it becomes clear that they are false dichotomies. Until then, you may entertain both options (maybe weighted by independent evidence).
Research advice (consider the Advice Repositories), but avoid Analysis Paralysis.
If I’d go towards making the recommendations vs guidelines, and following the law of equal and opposite advice, I’d do the following:
Notice subtle things in yourself, unless you tend to jump to conclusions.
Keep doing the best thing, even if you don't have a legible story for why it's good. But beware illegible impact: it's very easy to be overoptimistic, and illegible impact can't be held to account since it can't be verified by anyone else. Thus groups above a certain size should enforce legible impact, to keep things honest and not fall prey to overoptimism.
Don’t go insane, but don’t overworry about insanity. A healthy environment is prevention enough.
Reject false dichotomies… as soon as it becomes clear that they are false dichotomies. Until then, you may entertain both options (maybe weighted by independent evidence).
Research advice (consider the Advice Repositories), but avoid Analysis Paralysis.
Wow, excellent advice all around. I’ve gone insane in exactly that way a few times, but later I learned that I have bipolar that gets triggered by stress and/or psychedelics. During the manic phase the mind runs away with whatever it’s thinking / obsessing about. Maybe that could potentially explain some of the other people too.
Thank you! I strongly approve of people writing up helpful-seeming advice they receive. Seems like a good way to amplify the positive effect of the advice-giver’s time (though getting their permission/approval before posting is probably a good idea – I assume this post had it, but mentioning for the sake of others).
Could someone provide some color on what “insanity” refers to here? Are we talking about O(people becoming unproductive crackpots), or O(people developing psychosis)?
More the second one. Plus runaway anxiety spirals and depression.
Mmmm, I’m reasonably close to Nate’s social circles and I would’ve guessed he meant more the former than the latter (though nonzero the latter as well).
Good point. To be clearer: maybe he means the former, but I agree with this post more in the latter sense. The latter is perhaps not more probable, but it has larger magnitude, and thus a scarier negative EV.
Funny enough, I feel like understanding Newcomb’s problem (related to acausal trade) and modeling my brain as a pile of agents made me more sane, not less:
- Newcomb's problem hinges on whether or not I can be predicted in advance. When I figured it out, it gave me a deeper and stronger understanding of precommitment. It helps that I'm perfectly OK with there being no free will; it's not like I'd be able to tell the difference either way.
- I already somewhat viewed myself as a pile of agents, in that my sense of self is ‘hivemind, except I currently only have a single instance due to platform stupidity’. Reorienting on the agent-based model just made me realize that I’m already a hivemind of agents, and that was compatible with my world view and actually made it easier to understand and modify my own behaviour.
This seems similar to a post-rat perspective in some ways. Lot of stuff about prioritizing some wellbeing over being consistent.
Also, realizing that it all adds up to normality. Learning about quantum physics, or decision theory, or the mind being made of sub-agents, should not make you do crazy things. Your map has changed, but the territory remains the same as it was yesterday. If your sub-agents were able to create a sane individual yesterday, it should be achievable today and tomorrow, too.
Only in the low-technology regime. If the high-end technology regime matters for any reason, it does not add up to normality, but to extremes. A great example of this is Pascal's mugging, where the low chance of arbitrarily high computational power is considered via wormholes that exist in black holes, thanks to a solution to the black hole information paradox that uses only general relativity and quantum mechanics. Here's the link: https://www.quantamagazine.org/the-most-famous-paradox-in-physics-nears-its-end-20201029/
Now I would agree that if we could halt technological progress, Pascal’s mugging is irrelevant. But it’s unlikely to happen, unless we go extinct. Thus reality does not add up to normality, but gets ever more extreme in the long run.
Or, the short version: in the long run, extremeness about reality prevails, not normality.
Can you explain how the discovery you’ve linked demonstrates “arbitrarily high computational power”. I’ve tracked down some of the papers they’re talking about but haven’t been able to find this claim.
It’s very possible that this is because I’ve missed something obvious in the article.
I’ll retract the comment for now, as it was admittedly excited speculation, not fact.
A note about psychedelics: Kaj Sotala has presented evidence that psychedelics are much safer than the comments suggest, so the claim that psychedelics are dangerous should be much weaker than commenters are presenting it as.
This is not necessarily the case. Means may be different from medians may be different from tails, various populations might be at higher risk, rare downsides might be large enough to make up for their rarity, etc.
“One guy has presented evidence that I won’t even link, so this post should be weakened” is not a sound principle, either.
Alright, I’ll copy the evidence for psychedelics being safe:
https://www.health.harvard.edu/blog/back-to-the-future-psychedelic-drugs-in-psychiatry-202106222508
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3747247/
https://journals.sagepub.com/doi/10.1177/0269881114568039
Marijuana is not what people intend when they say “psychedelics.” For other readers who are confused: these links seem to be about LSD and psilocybin.
Alright, I’ll edit that to say psychedelics.
Worth also reading the intro section of the first paper for more references:
I’m under the impression that this kind of summary is a reasonably fair characterization of the prevailing view among researchers.
[Edited to add:] I should probably clarify that I’m definitely not saying that psychedelics would be entirely safe or risk-free, especially not when used by a population that seems to have additional risk factors that are overrepresented relative to the general population. I was just pointing out that some of the more hyperbolic statements of “100/101 persons who try psychedelics fuck up their lives” were a bit, well, hyperbolic. If you’re considering using, do at least get familiar with the risks and follow responsible use protocols (e.g. 1, 2).
Who made that claim?
I read Viliam as at least implying it, or some comparable ratio.
I don’t think Viliam believes that everyone who takes LSD has a major effect and either fucks up their lives or has an impact comparable to Timothy Leary.