6 non-obvious mental health issues specific to AI safety
Intro
I am a psychotherapist, and I help people working on AI safety. I have noticed patterns of mental health issues highly specific to this group. It’s not just doomerism; there are many more issues that are less obvious.
If you struggle with a mental health issue related to AI safety, feel free to leave a comment about it and about things that help you with it. You might also support others in the comments. Sometimes such support makes a lot of difference and people feel like they are not alone.
AI safety is a rather unusual field
The problems described in this post arise because AI safety is not an ordinary field to work in.
Many people within the AI safety community believe that it might be the most important field of work, but the general public mostly doesn’t care that much. The field itself is also extremely competitive, and newcomers often have a hard time getting a job.
No one really knows when we will create AGI, or whether we will be able to keep it aligned. If we fail to align AGI, humanity might go extinct, and even if we succeed, it will radically transform the world.
Patterns
AGI will either cause doom or create a utopia. Everything else seems unimportant and meaningless.
Alex is an ML engineer at a startup that fights aging. He believes that AGI will either destroy humanity or bring about a utopia that, among other things, will stop aging. So Alex thinks his job is meaningless and quits. He also sometimes asks himself, “Should I invest? Should I exercise? Should I even floss my teeth? It all seems meaningless.”
No one knows what the post-AGI world will look like. All predictions are wild speculations, and it’s very hard to tell whether any actions unrelated to AI safety are meaningful. This uncertainty can cause anxiety and depression.
These problems are an exacerbated version of the existential problem of the meaninglessness of life, and the way to mitigate them is to rediscover meaning in a world that ultimately has none.
Check out my post about Alex’s problem, and possible solutions for it.
I don’t know when we will create AGI and if we will be able to align it, so I feel like I have no control over it.
Bella is an anxious person. She recently got interested in AI safety and realized that nobody knows for sure how to align AGI.
She feels that AGI might pose an extreme danger and that there is nothing she can do about it. She can’t even tell how much time we have. A year? Five years? This uncertainty makes her even more anxious. And what if takeoff is so rapid that no one understands what is going on?
Bella is seeing a psychotherapist, but they treat her fear as something irrational. This doesn’t help and only makes Bella more anxious. She feels like even her therapist doesn’t understand her.
AI safety is a big part of my life, but others don’t care that much about it. I feel alienated.
Chang is an ML scientist working on mechanistic interpretability at an AI lab. AI safety has consumed his whole life and become part of his identity. He constantly checks AI safety influencers on Twitter, spends a lot of time reading LessWrong and watching AI podcasts, and even got a tattoo of a paperclip.
Chang lives outside the major AI safety hubs, and he feels a bit lonely because there is no one to talk to about AI safety in person.
Recently he attended his aunt’s birthday party. He talked about alignment with his family. They were a bit curious about the topic, but didn’t care that much. Chang feels like they just don’t get it.
Working on AI safety is so important that I neglected other parts of my life and burned out.
Dmitry is an undergrad student. He believes that AI safety is the most important thing in his life, and he either thinks about it or works on it all the time. He has never worked this hard before, and it’s hard for him to see that neglecting other parts of life and failing to compartmentalize AI safety is a straight path to burnout. When the burnout happens, at first he doesn’t understand what has happened, and he becomes depressed because he can’t work on AI safety.
People working on AI safety are extremely smart. I don’t think I am good enough to meaningfully contribute.
Ezra recently graduated from a university where he did research on transformers. He wants to work on AI safety, but it seems like everyone in the major AI labs and AI safety orgs is extremely talented and exceptionally educated. Ezra feels so intimidated that it’s hard for him to even try.
After a while he finally applies to a number of orgs, but he gets rejected everywhere, and other people share similar experiences. It seems like there are dozens of smart young people applying for each position.
He feels demotivated, but he also needs to pay his bills, so he takes a job at a non-AI-safety company, which makes him sad.
Check out my post about Impostor syndrome in AI safety, and how to overcome it.
So many smart people think that AI alignment is not that big of a problem. Maybe I’m just overreacting?
Francesca is a computer scientist working in academia. She is familiar with machine learning, but it’s not the focus of her work. She believes the arguments for existential risk are solid, and she worries about it.
Francesca is curious what top ML scientists think about AI safety. Some of them believe that x-risks are serious, while many others don’t worry about them much and think of AI doomers as weirdos.
Francesca feels confused by this. She still thinks the arguments for existential risk are solid, but social pressure sometimes makes her wonder whether the whole alignment problem might not be that serious after all.
Epilogue
If you struggle with a sense of meaninglessness due to AGI and believe you might benefit from professional help, I can help as a therapist or suggest other places where you can get professional help.
Check out my profile description to learn more about these options.
Thank you for posting this.
In the context of AI safety, I often hear statements to the effect of
There’s a very important, fundamental mistake being made there that can be easy to miss: worrying doesn’t help you accomplish any goal, including a very grand one. It’s just a waste of time and energy. Terrible habit. If it’s important to you that you suffer, then worrying is a good tactic. If AI safety is what’s important, then by all means analyze it, strategize about it, reflect on it, communicate about it. Work on it.
Don’t worry about it. When you’re not working on it, you’re not supposed to be worrying about it. You’re not supposed to be worrying about something else either. Think a different thought, and both your cognitive work and emotional health will improve. It’s pure upside with no opportunity cost. Deliberately change the pattern.
To all those who work on AI safety, thank you! It’s extremely important work. May you be happy and peaceful for as long as your life or this world system may persist, the periods of which are finite, unknown to us, and ultimately outside of our control despite our best intentions and efforts.
Your comment is somewhat along the lines of Stoic philosophy.
Very insightful post. Here are personal thoughts with low epistemic status and high rambling potential:
These all feel to me like corollaries to the belief “AGI is so important that I can’t gauge the value of anything else except in regards to how it affects AGI”. Hence: “everything else is meaningless because AGI will change everything soon” or “nobody around me is looking up at the meteor about to hit us and that makes me feel kind of insane. (*Cough* so I hang out with rationalists, whose entire shtick is learning how not to be insane)”.
As for other non-obvious effects: I personally feel some sort of perceived fragility around the whole field. There are arguments on this site for why AGI alignment should not be discussed in politics, or why attempting to convince OpenAI or DeepMind employees to switch jobs can easily backfire (eg this post for caution advice). These make any outreach at all seem risky. There are also people I know wondering whether they should attempt to do anything at all related to alignment, because they perceive themselves as probable dead weights. The relatively short timelines, the sheer scope, and the aura of impossibility around alignment seem to make people more cautious than they should be. Obviously the whole point of the field is to be cautious; but while it’s true that the tried-and-tested scientific method isn’t safe for AGI in general, I’m not sure stressing the rationalist-tools solve-problems-before-you-experiment approach is healthy everywhere. So, caution is right there in the description of the field, but you have to contain it well so that it doesn’t infect places where it would do you good to be reckless and use trial and error. I am probably quite wrong about this, but I don’t see many people talking about it, so if there’s any reasonable doubt we should figure it out.
Alignment work should probably be perceived as less fragile. Unlike the AI field in general, alignment projects specifically don’t pose much of a risk to the world. So we can probably afford to be more loose here than elsewhere. In my experience alignment feels like a pack of delicate butterflies flying together, with every flap of wings sending dozens of comrades spiraling out of the sky, which might or might not set off a domino/Rube Goldberg machine that blows up the world.
Alignment is also perceived as fragile. Almost all paradigms of alignment and AI safety research (interpretability, agent foundations, prosaic alignment, model encryption, etc.) are often criticised on LW by different people as at best totally ineffectual from an opportunity-cost perspective, and at worst downright harmful due to unforeseen effects or as safety-washing enablers for AGI labs. (I myself am guilty of many such criticisms.)
OTOH, this very work on the strategy and methodology of AI safety development could be reasonably criticised as worsening the psychological state of AI safety researchers and therefore potentially net harmful despite its marginal improvements to strategy and methodology (if these even happen in practice, which is not clear to me).
The alienation is something I felt for a bit, until I started working on my project, working with folk, talking to folk, etc. Also, I’ve been very pleasantly surprised at how receptive non-AI/non-tech folk are when talking to them about AI risk, as long as it’s framed in a down-to-earth, relatable manner, introduced organically, etc.
Thanks for sharing your experience. For me, talking with non-AI-safety people about this is similar to talking about global warming: if someone tells me about it, I say it’s an important issue, but I honestly don’t invest much effort in fighting it.
This is my experience, and yours might be different.
How would one find a therapist in their local area who’s aware of what’s going on in EA/rat circles, such that they wouldn’t interpret statements about, say, x-risks as schizophrenic/paranoid?
I think the recent public statements, media coverage, public discussions, government activity, YouGov polls, etc. have moved worry about AI x-risk sufficiently into the Overton window. A psychotherapist or psychiatrist who would suspect paranoia or schizophrenia primarily because of such worries today is just a very bad professional.
I feel like Ezra. I’ve also gotten various sources of feedback that make me think I might not be cut out for “top level” alignment research… but I find myself making progress on my beliefs about the problem, just slower than others.
Thoughts? Advice?
Same here. The work I’m doing may not align with conventional thinking or be considered part of the major alignment work being pursued, but I believe I’ve used my personal efforts to understand the alignment problem and the complex web of issues surrounding it.
My advice? Continuously improve your methods and conceptual frameworks, as that will drive much of the progress in the complexity and intricacy of this field. Good luck with your progress!!
Also, because nobody has a great solution for alignment yet, I can see that it is very easy for any work to be heavily critiqued. In other domains, you can feel like you are contributing something valuable even if you aren’t doing anything groundbreaking. This is slightly different from not feeling smart enough, I think. Although it hasn’t happened to me (yet!) because I haven’t shared any of my ideas publicly, I can see that the constant critique could be quite demotivating.
As a newcomer too, my experience of the community is that it has felt much less supportive than other technical fields I have worked in (although I have also met some people who are lovely exceptions!). It has certainly made me question whether it is an area I want to work in. I’m not so convinced that I’m going to solve alignment that it feels imperative for me to work in the field, and I still feel I have a lot of agency about whether I do or not. However, for somebody who doesn’t feel that sense of agency or who hasn’t experienced different communities, I can imagine that it might affect their mental health subtly and perniciously without them realising its impact.
There are also mental health issues among people who know about AI safety concerns but are not researchers themselves and are not even remotely capable of helping or contributing in a meaningful way.
I, for one, learned about the severity of the AI threat only after my second child was born. Given the rather gloomy predictions for the future, I’m concerned for their safety, but there does not seem to be anything I can do to ensure they would be OK once the Singularity hits. It feels like I brought my kids into life just in time for the apocalypse to hit them when they are still young adults at best, and, irrationally, I cannot stop thinking that I’m thus responsible for their future suffering.
I feel like this is an instance of a more general issue: we are bad at rescaling utility when we encounter new situations, and our non-utilitarian way of evaluating outcomes can lead us into very large amounts of pain. Utopia and doom/dystopia are the limiting cases here: information that appears to change our utility calculations vastly, especially on the negative side, so psychological problems like denialism or guilt appear.
Essentially, the way to handle this problem is to do two things:
1. Reset the zero point, so that the way the world works now, given the new information, becomes your zero.
2. Rescale utilities so that, instead of assigning vastly important problems massive utility or disutility, you go in the opposite direction: give other problems less utility than this one, while keeping something approximating a normal utility even for the most important problems.
philip_b has the gory details on that process, and it’s worth taking a look at it:
In general, I kinda wish rationalists would frame their pitches, at least later on, as essentially about caring less about certain problems, rather than caring more about x cause.
What is this 0 point?
Essentially what you count as neutral, or what you consider to be normal, as distinguished from negative or positive states of the world.
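To make the two steps concrete, here is a toy numeric sketch (my own illustration, not from the comments above; the outcome names, numbers, and the choice of tanh as the squashing function are all arbitrary). Any strictly increasing transform preserves the ranking of outcomes, so you can shift the zero point and compress the extreme magnitudes without changing any of your preferences between outcomes.

```python
import math

def readjust(utilities, zero_outcome, scale=1000.0):
    """Shift the zero point to `zero_outcome`, then compress extremes with a
    strictly monotone squashing function (tanh). The ranking of outcomes is
    preserved, but vastly important outcomes no longer dwarf everything else."""
    zero = utilities[zero_outcome]
    return {o: math.tanh((u - zero) / scale) for o, u in utilities.items()}

# Raw utilities where AGI outcomes dwarf everything else (made-up numbers).
raw = {"doom": -1_000_000, "status_quo": 0, "flossing": 1, "aligned_agi": 1_000_000}

# Step 1: zero point becomes the status quo. Step 2: extremes are squashed
# to roughly [-1, 1], while small everyday utilities keep a visible weight.
adjusted = readjust(raw, zero_outcome="status_quo")

# Same ordering of outcomes before and after, only tamer magnitudes.
assert sorted(raw, key=raw.get) == sorted(adjusted, key=adjusted.get)
```

This is only an ordinal sketch of the comment's advice: the point is that "caring less" in absolute magnitude costs you nothing decision-theoretically, since what you prefer to what is untouched.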
Typo nitpicks (suggestions): “humanity might extinct” --> “go extinct” / “everything else seem unimportant and meaningless” --> “everything else seems” / “non-AI safety company” --> “a non-AI safety company” / “he feels demotivated, he also needs” --> “he feels demotivated, but he also needs” / “version of existential problem” --> “version of the existential problem”
Thanks. I am not a native English speaker, and I use GPT-4 to help me catch mistakes, but it seems like it’s not perfect :)
The meta-problem everyone is navigating is also the meta-advice: finding the answers for ourselves is unique to our own parameterized realities. Well said here.
I wonder to what extent you meant it when you said those were specific to AI safety? I’m not at all involved in that (though on the other hand I’m probably still on LW far too much), and I literally have all of them. Or did you mean “here is how some common-ish psychological issues manifest themselves in an AI safety context”?
These problems are not unique to AI safety, but they come up far more often with my clients working on AI safety than with my other clients.
Yeah, I’d have guessed as much
Maybe it’s a sign I should get into AI safety, then /j
This is tricky. Might it exacerbate your problems?
Anyway, if there’s a chance I can be helpful to you, let me know.
Thank you so much for posting this. It feels weird to tick every single symptom mentioned here...
The burnout that ‘Dmitry’ experiences is remarkably accurate to what I am experiencing. Are there any further guides on how to manage this? It would help me so much; any help is appreciated :)