I don’t know! I’ve certainly seen people say P(doom) is 1, or extremely close. And anyway, bombing an AI lab wouldn’t stop progress, but would slow it down—and if you think there is a chance alignment will be solved, the more time you buy the better.
I am bringing it up for calibration. As to whether it's horrific to the same degree: in some ways it's of a higher magnitude, no? Even the Nazis weren't going to cause human extinction. Of course, the difference is that the Nazis were intentionally doing horrific things, whereas AI researchers, if they cause doom, will do it by accident; but is that a good excuse? You wouldn't easily forgive a drunk driver who runs over a child...
Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence?
They’re not the same. None of these are extinction events; if preventing the extinction of the human race doesn’t legitimise violence, what does? (And if your answer is ‘nothing’, does that mean you don’t believe in the enforcement of laws?)
Basically, I can’t see a coherent argument against violence that’s not predicated either on a God, or on humanity’s quest for ‘truth’ or ideal ethics; and the latter is obviously cut short if humans go extinct, so it can’t rule out violence aimed at preventing that outcome.
The assassination of Archduke Ferdinand certainly coerced history, and it wasn’t state-backed. So did that of Julius Caesar, as would have Hitler’s, had it been accomplished.
Well, it’s clearly not true that violence would do nothing to impede progress. Either you believe AI labs are making progress towards AGI, in which case every day they’re not working on it (because their servers have been shut down or, more horrifically, because some of their researchers have been incapacitated) is a day progress isn’t being made; or you think they’re not making progress anyway, so why are you worried?
But AI doomers do think there is a high risk of extinction. I am not saying a call to violence is right: I am saying that not discussing it seems inconsistent with their worldview.
That’s not true - we don’t make decisions based on perfect knowledge. If you believe the probability of doom is 1, or even not 1 but incredibly high, then any actions that prevent it or slow it down are worth pursuing—it’s a matter of expected value.
Except that violence doesn’t have to stop the AI labs; it just has to slow them down. If you think that international agreements and so on have a chance of success, and given that these take time, then things like cyberattacks that disrupt AI research can help, no?
If it’s true AI labs aren’t likely to be the cause of extinction, why is everyone upset at the arms race they’ve begun?
You can’t have it both ways: either the progress these labs are making is scary—in which case anything that disrupts them (and hence slows them down even if it doesn’t stop them) is good—or they’re on the wrong track, in which case we’re all fine.
Is all non-government-sanctioned violence horrific? Would you say that objectors and resistance fighters against Nazi regimes were horrific?
Here’s my objection to this: unless ethics are founded on belief in a deity, they must stem from humanity. So an action that can wipe out humanity makes any discussion of ethics moot; the point is, if you don’t sanction violence to prevent human extinction, when do you ever sanction it? (And I don’t think it’s stretching the definition to suggest that law requires violence.)
But when you say extinction would become more likely, you must believe that the probability of extinction is not already 1.
OK, so then AI doomers admit it’s likely they’re mistaken?
(Re side effects, no matter how negative they are, they’re better than the alternative; and it doesn’t even have to be likely that violence would work: if doomers really believe P(doom) is 1, then any action with a non-zero probability of success is worth pursuing.)
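To make the expected-value point explicit (a rough sketch; the symbols are mine, not anyone's formal model): let $V$ be the value of humanity surviving, $p$ the probability that a given disruptive action actually averts or delays doom, and $c$ the action's cost. Then, taking $P(\text{doom}) \approx 1$ absent intervention,

$$\mathbb{E}[\text{act}] - \mathbb{E}[\text{do nothing}] \approx pV - c,$$

which is positive for any $p > 0$ once $V$ is treated as astronomically large. That's the shape of the argument I'm attributing to doomers, not a claim that its premises are right.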
This is a pedantic comment. So the idea is you should obey the law even when the law is unjust?
Isn’t preventing the extinction of the human race one of those exceptions?
Er, yes. AI risk worriers think AI will cause human extinction. Unless they believe in God, surely all morality stems from humanity, so the extinction of the species must be the ultimate harm; and preventing it surely justifies violence (if it doesn’t, then what does?)
Yes, but what I’m saying is that this isn’t true: few people are absolute pacifists. So violence in general isn’t taboo; I doubt most people object to things like laws (which ultimately rely on the threat of violence).
So why is it that violence in this specific context is taboo?
So, you would have advocated against war with Nazi Germany?
To be fair, I’m not saying it’s obviously wrong; I’m saying it’s not obviously true, though many people seem to believe it is!
Successful attacks would buy more time, though.