[Question] Why haven’t there been assassination attempts against high-profile AI accelerationists like Sam Altman yet?

Disclaimer: This is *not* a call to action in any form. I neither endorse any form of violence nor hold the radical anti-AI views that might prescribe assassinations. The question came out of a philosophical debate I had with my roommate yesterday; we did not reach a conclusion, which led to this post.

Tldr: There are a lot of people in the EA/rationalist space alone who strongly believe in short AI timelines, have a high p(doom), and devote their entire careers to slowing down AI. Many of them have access to money and live in California. Why has nobody even tried to kill Sam Altman or other high-impact individuals who heavily contribute to speeding up AI?

Underlying Beliefs:
Within the EA/rationality community there are many people with relatively short AI timelines and a high p(doom). Exaggerating somewhat, they might think “we’re all gonna die in 5 years”. Many of them are highly educated in AI, believe they know a lot the market hasn’t priced in, and yet feel relatively powerless to affect the AI trajectory. Many strive for high impact and devote a significant part of their lives to an AI safety fight that many Yudkowsky-like people consider hopelessly lost.

Overall, the course and speed of AI development seem to be determined by a very small set of people: even many highly educated AI researchers in SF have negligible impact, while a few high-profile figures like Sam Altman concentrate public attention and decision-making power and attract capital. If someone were to succeed in assassinating him, that would probably slow down AI directly, for obvious reasons. But even a failed attempt would set a precedent, making it much less attractive to be a prominent AI accelerationist. This might push some decision-making behind closed doors, but much of the current acceleration only works because he is such a visible individual who attracts investors, users, companies, etc. So there could also be a significant indirect slowdown from making it less cool “to be AI”.

This makes me think that many of those “high impact, saving the world from AI” people, who feel paralyzed by their complete lack of influence on short AI timelines and believe AI will kill them soon anyway, would spot an opportunity to actually have radical impact and survive, if they thought an assassination were realistic to achieve.

Now, many of the more radical rat/EA people live in California, where Altman lives too. Many of them have access to some funds. Weapons are relatively easy to obtain in the US, and organised crime is large enough that professional hitmen are supposedly available for something in the ballpark of ~$50k. AI leaders also live under a much less strict security regime than polarizing top politicians like Donald Trump. I would find it surprising if a few highly educated, rationally minded people with some funds, who organised themselves well, did not stand a chance of assassinating Altman and a couple of other leaders in a series of strikes on a single day, radically changing the way AI is currently being accelerated.

For most people with normal ethical beliefs, myself included, this would obviously be an extremely evil thing to do. However, I reckon that not everybody shares my beliefs, and in those communities there are many strict utilitarians who say “if I could have killed Putin 3 years ago, I would have done it”; given their beliefs about AI, Altman might be in an analogous position for them. Yet I haven’t seen a single article or news headline about even an attempt to do so. Why is that not happening? Are there obvious errors in my reasoning that I don’t see?

I want to emphasise again that I do not endorse this at all and strictly oppose any violence. As said above, this reasoning was the product of a philosophical debate with my roommate that we could not resolve, hence this post.