I agree with this. I find it very weird to imagine that “10% x-risk this century” versus “90% x-risk this century” could be a crux here. (And maybe it’s not, and people with those two views in fact mostly agree about governance questions like this.)
Something I wouldn’t find weird is if specific causal models of “how do we get out of this mess” predict more vs. less utility for state interference. E.g., maybe you think 10% risk is scarily high and a sane world would respond to large ML training runs way more aggressively than it responds to nascent nuclear programs, but you also note that the world is not sane, and you suspect that government involvement will just make the situation even worse in expectation.