An equilibrium against outright violence seems like one of the great accomplishments of civil society. We don’t let might make right, and there’s a strong argument that things could devolve quickly once violence is on the table.
I suspect that when people imagine violence on the margin, they assume they commit some violence and nobody responds in kind. More realistically, violence invites violent retaliation, people stop talking or arguing (which I think there is still hope for), and things get much worse.
Asymmetric weapons and “argument gets argument, not bullet” are relevant here.
You might claim that building unsafe AGI is itself violence, and I see the case for that. But that claim rests on a set of beliefs that is far from universally accepted (contrast sticking bullets in people). By the same logic, one could claim that for every day AGI is delayed, millions more die, and therefore anyone contributing to those delays is committing violence that justifies violence against them.
I’d rather stay out of worlds where things go that way. The strong deontological taboos exist for good reasons. Humans run on corrupted hardware, and our civilization largely seems sane enough to say “no murder, no exceptions.” Well, for private individuals. Some people do need to be stopped, and we have institutions for that (police, governments, etc.) that accomplish a lot (compare places with functional law enforcement to places without). Within that approach, getting everyone to agree that if you take some action X that is agreed upon as bad, the state’s monopoly on violence will stop you, is a good and somewhat asymmetric outcome. Hence AI policy and government intervention, which is not a bad idea if done right.
To get a little more philosophical, I’m staunchly of the “day-to-day actions get driven by deontology and virtue ethics by and large, but the deontology and virtue ethics are justified by consequentialist reasons” school. And in this case, I think there’s solid consequentialism backing up the deontology and the taboo here, and only myopia makes it seem otherwise.