You’re assuming “the violence might or might not stop extinction, but then there will be some side-effects (that are unrelated to extinction)”. But my concrete belief is that most acts of violence you could try to commit would probably make extinction more likely, not less, because (a) they wouldn’t work, and (b) they would destroy the trust and coordination mechanisms necessary for the world to actually deal with the problem.
To spell out a concrete example: someone tries bombing an AI lab. Maybe they succeed, maybe they don’t. Either way, they haven’t actually stopped the development of AI, because other labs will continue the work. But now, when people are considering who to listen to about AI safety, the “AI risk is high” people get lumped in with crazy terrorists and sidelined.
But when you say extinction will become more likely, you must believe that the probability of extinction is not already 1; a probability of 1 can’t go any higher.
Well… Yeah? Would any of us care to build knowledge that improves our odds if our odds were immovably terrible?
I don’t know! I’ve certainly seen people say P(doom) is 1, or extremely close. And anyway, bombing an AI lab wouldn’t stop progress, but it would slow it down; if you think there is any chance alignment will be solved, then the more time you buy, the better.
If you think P(doom) is 1, you probably don’t believe that a terrorist bombing of anything would do enough damage to be useful. That is probably one of EY’s cruxes on violence.