Disclaimer: This is *not* a call to action in any form. I neither endorse any form of violence nor hold the radical anti-AI views that might potentially prescribe assassinations. The question is the result of a philosophical debate I had with my roommate yesterday; we did not come to a conclusion, which led to this post.
Tl;dr: There are a lot of people in the EA/rationalist space who strongly believe in short AI timelines, have a high p(doom), and devote their entire careers to slowing down AI. Many of them have access to money and live in California. Why has nobody even tried to kill Sam Altman or other high-impact individuals who heavily contribute to speeding up AI?
Underlying Beliefs:
Within the EA/rationality community there are a lot of people with relatively short AI timelines and a high p(doom). Exaggerating somewhat, they might think "we're all gonna die in 5 years". A lot of those people are highly educated in AI and think they know a lot the market hasn't priced in, yet they feel relatively powerless to affect the AI trajectory. Many of them strive for high impact and devote a significant part of their lives to an AI safety fight that many Yudkowsky-like people consider hopelessly lost.

Overall, how and how fast AI develops seems strongly determined by a very small set of people: even many highly educated AI researchers in SF have negligible impact, while a few high-profile people like Sam Altman concentrate a lot of public attention and decision-making power and attract capital. If one were to succeed in assassinating him, that would probably slow down AI significantly and directly, for obvious reasons. But additionally, even a failed attempt would set a precedent, making it much less attractive to be a shining AI accelerationist. Potentially this would push some of the decision-making behind closed doors, but much of the acceleration happening only works because he is such a shiny individual who attracts investors, users, companies, etc. So indirectly, too, there could be a significant slowdown from making it less cool "to be AI".

This makes me think that many of those "high-impact, saving the world from AI" people, who feel paralyzed by their complete absence of impact on short AI timelines and who believe AI will kill them soon anyway, would spot an opportunity to actually have radical impact and survive, if they came to think that an assassination was realistic to achieve.
Now, many of the more radical rat/EA people live in California, where Altman lives too. Many of them have access to some funds. Weapons are relatively easy to obtain in the US, and organized crime is big enough there that professional assassins/hitmen are available for something in the ballpark of ~$50k. These AI leaders live under a much less strict security regime than partisan top politicians like Donald Trump. I would find it surprising if a few highly educated, rationally minded people with some funds, were they to organize themselves well, did not stand a chance of assassinating Altman and a couple of other leaders in a series of strikes on a single day, radically changing the way AI is currently being accelerated.
For most people with normal ethical beliefs, like myself, that would obviously be an extremely evil thing to do. However, I reckon that not everybody shares my beliefs, and in those communities there are many strict utilitarians who say "if I could have killed Putin 3 years ago, I would have done it". Given their beliefs about AI, Altman might be in an analogous position. Yet I haven't seen a single article or news headline about even an attempt to do so. Why is that not happening? Are there obvious errors in my reasoning that I don't see?
I want to emphasise again that I do not endorse this at all and strictly oppose any violence. This reasoning was the product of a philosophical debate I had with my roommate yesterday and we did not come to a conclusion, leading to this post here.
Equilibria against outright violence seem like one of the great accomplishments of civil society. We don't let might make right, and there's an argument that things could devolve pretty quickly once violence is on the table.
I suspect that when people imagine violence on the margin, they're assuming they do some violence and nobody else responds in kind. More realistically, violence gets violent retaliation, people stop talking or arguing (which is where I think there is real hope), and things get a lot worse.
Asymmetric weapons and “argument gets argument, not bullet” are relevant here.
You might claim that building unsafe AGI is itself violence, and I see the case for that, but that claim rests on a set of beliefs that is not universally accepted (contrast sticking bullets in people). And by the same logic, one could claim that for every day AGI is delayed, millions more die, and therefore anyone contributing to those delays is committing violence that justifies violence against them.
I'd rather stay out of worlds where things go that way. The strong deontological taboos are there for good reasons. Humans run on corrupted hardware, and our civilization largely seems sane enough to say "no murder, no exceptions". Well, for private individuals. Some people do need to be stopped, and we have institutions for that (police, governments, etc.) that accomplish a lot (compare places with and without functional law enforcement). And within that approach, getting everyone to agree that if you take an action X that is agreed upon as bad, the state's monopoly on violence will stop you, is a good, kinda asymmetric outcome. Hence, AI policy and government intervention, which is not a bad idea if done right.
To get a little more philosophical, I'm staunchly of the view that day-to-day actions are driven by deontology and virtue ethics by and large, but the deontology and virtue ethics are justified by consequentialist reasons. And in this case, I think there's solid consequentialism backing up the deontology and the taboo here, and only myopia makes it seem otherwise.
The same reason there are not constant assassination attempts in general: it’s outside the Overton Window of acceptable actions. One of the many benefits of civilization is that we’ve all agreed not to kill each other, even when it seems strategically beneficial in the short-term, because that’s what’s necessary to create the world we’d like to live in long-term. Defection against this norm is harshly punished to maintain the norm.