So I disagree with this, but maybe it's worth stepping back a sec, because yeah, the situation is pretty scary. Whether you think AI extinction is imminent, or that Eliezer is catastrophizing and AI's not really a big deal, or that AI is a big deal but Eliezer's writing is making things worse: any way you slice it, something uncomfortable is going on.
I'm very much not asking you to be okay with provoking a nuclear second strike. Nuclear war is hella scary! If you don't think AI is dangerous, or you don't think a global moratorium is a good solution, then yeah, it totally makes sense to be scared by this. And even if you think (as I do) that an actually-enforced global moratorium is a good idea, the possible consequences are still really scary and not to be taken lightly.
I also didn't particularly object to most of your earlier comments here (I think I disagree, but it's a kinda reasonable take; getting into that doesn't seem like the point).
But I do think there are really important differences between regulating AIs the way we regulate nukes (which is what I think Eliezer is advocating) and proactively launching a nuclear strike on a country. They're both extreme proposals, but I think it's false to say Eliezer's proposal is totally outside international norms. It doesn't feel like a nitpick or hairsplit to ask someone to notice the difference between an international nuclear non-proliferation treaty (which other governments are pressured to sign) and a preemptive nuclear strike. The latter is orders of magnitude more alarming. (I claim this is a very reasonable analogy for what Eliezer is arguing.)