This is kind of the point where I despair about LessWrong and the rationalist community.
While I agree that he did not call for nuclear first strikes on AI centers, he said:
If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
and
Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
Asking us to be OK with provoking a nuclear second strike, by attacking a nation that is building a GPU cluster and is not actually a signatory to an international agreement banning GPU clusters, is still bad, and whether the nukes fly as part of the first strike or the retaliatory second strike seems like a weird thing to get hung up on. Picking this nit feels like a deflection, because what Eliezer said in the TIME article is still entirely deranged and outside international norms.
And emotionally, I feel really, really uncomfortable. Like, sort of dread in stomach uncomfortable.
So I disagree with this, but, maybe want to step back a sec, because, like, yeah the situation is pretty scary. Whether you think AI extinction is imminent, or that Eliezer is catastrophizing and AI’s not really a big deal, or AI is a big deal but you think Eliezer’s writing is making things worse, like, any way you slice it something uncomfortable is going on.
I’m very much not asking you to be okay with provoking a nuclear second strike. Nuclear war is hella scary! If you don’t think AI is dangerous, or you don’t think a global moratorium is a good solution, then yeah, this totally makes sense to be scared by. And even if you think (as I do), that a global moratorium that is actually enforced is a good idea, the possible consequences are still really scary and not to be taken lightly.
I also didn’t particularly object to most of your earlier comments here. (I think I disagree, but I think it’s a kinda reasonable take. Getting into that doesn’t seem like the point.)
But I do think there are really important differences between regulating AIs the way we regulate nukes (which is what I think Eliezer is advocating), and proactively nuclear striking a country. They’re both extreme proposals, but I think it’s false to say Eliezer’s proposal is totally outside international norms. It doesn’t feel like a nitpick/hairsplit to ask someone to notice the difference between an international nuclear proliferation treaty (that other governments are pressured to sign), and a preemptive nuclear strike. The latter is orders of magnitude more alarming. (I claim this is a very reasonable analogy for what Eliezer is arguing)
That seems mostly like you don’t feel (at least on a gut level) that a rogue GPU cluster, in a world where there’s an international coalition banning them, is literally worse than a (say) 20% risk of a full nuclear exchange.
If instead, it was a rogue nation credibly building a nuclear weapon which would ignite the atmosphere according to our best physics, would you still feel like it was deranged to suggest that we should stop it from being built even at the risk of a conventional nuclear war? (And still only as a final resort, after all other options have been exhausted.)
I can certainly sympathize with the whole dread in the stomach thing about all of this, at least.