5 psychological reasons for dismissing x-risks from AGI
People fiercely argue about whether AGI is likely to be an existential threat.
Most of the arguments explore conceptual, technical, or governance aspects of the topic and are based on reason, logic, and predictions. But the subject is so complex and uncertain that I often can't tell who is right, especially since I'm not an expert on the topic.
Because of this complexity, people often make judgements based on their personal preferences and emotions, and this post is a deep dive into the psychological reasons why people might dismiss existential risks from AGI. I don't mean that there are no rational arguments for this position; there are. They are just outside the scope of this post.
Self-interest and self-censorship
Alan is a high-ranking manager working on an LLM project at a tech giant. He believes that AI development is a great opportunity for him to climb the corporate ladder and earn a lot of money.
Alan is a speaker at a tech conference, and after his speech a journalist asks him about his thoughts on existential risks.
There are several things going on in Alan’s mind at that moment.
AI is his gateway to a better career than any of his friends have. With the money he will earn, he'll be able to buy the house of his wife's dreams and send his daughter to any university she wants without worrying about the cost.
Thoughts about x-risks threaten these dreams, so he tries to avoid them and to convince himself that he is actually doing a good thing: "It's not just me who will benefit from this technology. It will make the world a much better place for everyone, and those AI doomers are trying to prevent that."
Alan also can't publicly say that AI is a threat. His company's PR department wouldn't like it, and that would cause a lot of problems for him. So the only safe thing for him to say is that his company has an excellent cybersecurity team and does extensive testing before deploying its models, so there is no reason to worry.
Denial as a coping mechanism
Joep is a respected machine learning scientist with an unhealthy habit of coping with stress through denial.
He has a narcissistic mother who showed him no affection when he was a kid. This was painful, and he learned to cope by denying the problem and telling himself that everything was fine. Because of this, every time he feels fear or anxiety, he tries to convince himself that its cause doesn't actually exist. So even though thinking about x-risks makes him anxious, he tries to convince himself that these risks aren't real.
At the same tech conference where Alan, the hero of the previous story, gave his talk, a journalist approaches Joep and asks whether he is worried about existential risks from AI.
Joep becomes visibly annoyed and tells the journalist that the only people who believe in these risks are fear-mongering Luddites. He tells her that his team spent six months on thorough security testing before releasing their last LLM, and that in any critical application they always keep people in the loop. He also recalls a prominent prediction by Yudkowsky that turned out to be completely false. "We come up with new safety ideas every day. We'll sort out the problems as they come." It seems like he wants to prove this to himself more than to the journalist.
After the interview he is still angry and annoyed, and he replays in his head all the arguments against existential risks that he just made.
Social pressure and echo chambers
Ada, a young AI ethics researcher, felt a mix of excitement and nerves as she prepared to present her work on AI risks at a high-profile tech conference, the same one where we met Alan and Joep in the previous stories. She respected both of them and was excited to hear their talks.
Alan spoke about his company's robust cybersecurity measures, radiating confidence that AI posed no threat. Joep followed, highlighting the responsible steps his team was taking in their work with AI. The audience was visibly reassured, and Ada started to doubt the part of her own research that focused on existential risks from AGI.
When it was her turn, Ada hesitated. Her presentation included slides about x-risks, but recalling Alan's and Joep's confidence, she was so anxious about looking foolish that she skipped over them. Instead, she echoed their optimism about the future of AI and said that many talented and responsible people are working on it, so there is no reason to worry.
As she left the stage, Ada felt relief but also an unsettling sense that she hadn't been entirely honest. To feel better, she told herself that these experts know the AI field far better than she does, so her fears about x-risks are probably mistaken.
People don’t have images of an AI apocalypse
Dario and Claude are cohosts of an AI podcast. The subject of their newest episode is existential risks from AI. Claude is concerned about the risks, while Dario remains skeptical.
Once recording begins, Claude outlines the argument that we won't be able to control an AI that is far smarter than humans, and that this might lead to disaster. Dario can't fully agree. In his mind, AGI is still far off, and these threats feel too distant and abstract; he just can't imagine how it could pose a threat to humanity.
Dario says, "It's not like AI can control nuclear weapons or something. I think high confidence that AGI will want to destroy us is pretty speculative." To believe, Dario needs a vision: something vivid and convincing, not abstract principles. But each time he asks Claude about concrete scenarios of doom, the answers are vague and uncertain, relying on high-level ideas like the orthogonality thesis or "chimps can't control humans, and we'll be like chimps to an AGI."
Claude acknowledges the difficulty of envisioning concrete doomsday scenarios but insists that the absence of clear examples doesn't negate the risk. He says, "Just because we can't draw a picture doesn't mean the danger isn't real."
Dario is not convinced. He believes that some people are too certain about their abstract ideas and ignore the reality that we are making incremental progress on alignment.
Marginalization of AI doomers
Eli is a software developer who keeps up with the latest tech trends. Recently he started following AI developments and got interested in the discussions around existential risks from AI, and he noticed that these discussions are often emotional.
Scrolling through Twitter, he found a thread written by an emotional doomer who had no doubts about his views. He reminded Eli of environmental activists, and Eli thought, "There are always some people convinced the sky is falling."
Eli believes that climate change is a complicated problem that requires a lot of thought and effort to solve, but that overly emotional and overconfident activists do more harm than good: they annoy people and poison any thoughtful discussion.
Eli sees AI alarmists as the same kind of people. He believes they spoil the image of the AI safety community and make it harder to discuss more tangible, near-term problems like the spread of misinformation or the concentration of power in the hands of AI labs.
A friend recommended the x-risk-themed episode of the AI podcast hosted by Dario and Claude, and Eli decided to give it a listen during his commute. Claude sounded terrified of existential risks, and even though he seemed like a smart person, Eli immediately classified him as an activist and didn't take his arguments too seriously. "Ah, another one of those anxious Luddites," he thought.