(In retrospect (and with the knowledge that these sorts of statements are always narrativizing a more complex past), this post was roughly the inflection point where I gradually started moving from “Yudkowsky is a genius who is one of the few people thinking about the world’s biggest problems” to “lol, what’s Big Yud catastrophizing about today?” First seeing that he was wrong about some things made it easier to think critically about other things he said, and here we are today, but that’s dragging the conversation in a very different direction than your OP.)
If I had to put down my own inflection point for when I started getting worried about Yudkowsky’s epistemics and his public statements around AI risk, it would be the Time article. It showed me two problems:
1. Yudkowsky has a big problem with overconfidence; many of his statements in the Time article are misleading at best, and the general public likely wouldn’t recognize them as misleading.
2. Yudkowsky is terrible at PR, and generally is unable to talk about AI risk without polarizing people. Given that AI risk is thankfully still mostly unpolarized and outside of politics, I am getting concerned that Yudkowsky is a terrible public speaker/communicator on AI risk, even worse than some AI protests.
Edit: I sort of retract my statement. While I still think Eliezer is veering dangerously close to hoping for warfare and possible mass deaths over GPU clusters, I do retract the specific claim that Eliezer advocated nukes. On a second reading, it was instead airstrikes and acts of war, with no claims of nuking other countries. I misremembered the actual claims made in the Time article.
it would be the Time article and his calls for nuclear strikes on AI centers.
(edit: I see Noosphere has since edited his comment, which seems good, but, leaving this up for posterity)
He did not call for nuclear strikes on AI centers, and while I think this was an understandable thing to misread him on initially, by this point we’ve had a whole bunch of discussions about it and you have no excuse to continue spreading falsehoods about what he said.
I think there are reasonable things to disagree with Eliezer on and reasonable things to argue about his media presence, but please stop lying.
This is kind of the point where I despair about LessWrong and the rationalist community.
While I agree that he did not call for nuclear first strikes on AI centers, he said:
If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
and
Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
Asking us to be OK with provoking a nuclear second strike, by attacking a nation that is building a GPU cluster but is not actually a signatory to an international agreement banning GPU clusters, is still bad, and whether the nukes fly as part of the first strike or the retaliatory second strike seems like a weird thing to get hung up on. Picking this nit feels like a deflection, because what Eliezer said in the TIME article is still entirely deranged and outside international norms.
And emotionally, I feel really, really uncomfortable. Like, sort of dread in stomach uncomfortable.
So I disagree with this, but, maybe want to step back a sec, because, like, yeah the situation is pretty scary. Whether you think AI extinction is imminent, or that Eliezer is catastrophizing and AI’s not really a big deal, or AI is a big deal but you think Eliezer’s writing is making things worse, like, any way you slice it something uncomfortable is going on.
I’m very much not asking you to be okay with provoking a nuclear second strike. Nuclear war is hella scary! If you don’t think AI is dangerous, or you don’t think a global moratorium is a good solution, then yeah, this totally makes sense to be scared by. And even if you think (as I do), that a global moratorium that is actually enforced is a good idea, the possible consequences are still really scary and not to be taken lightly.
I also didn’t particularly object to most of your earlier comments here (I think I disagree, but it’s a kinda reasonable take; getting into that doesn’t seem like the point).
But I do think there are really important differences between regulating AIs the way we regulate nukes (which is what I think Eliezer is advocating) and proactively launching a nuclear strike on a country. They’re both extreme proposals, but I think it’s false to say Eliezer’s proposal is totally outside international norms. It doesn’t feel like a nitpick/hairsplit to ask someone to notice the difference between an international nuclear non-proliferation treaty (that other governments are pressured to sign) and a preemptive nuclear strike. The latter is orders of magnitude more alarming. (I claim this is a very reasonable analogy for what Eliezer is arguing.)
That seems mostly like you don’t feel (at least on a gut level) that a rogue GPU cluster, in a world where there’s an international coalition banning them, is literally worse than a (say) 20% risk of a full nuclear exchange.
If instead, it was a rogue nation credibly building a nuclear weapon which would ignite the atmosphere according to our best physics, would you still feel like it was deranged to suggest that we should stop it from being built even at the risk of a conventional nuclear war? (And still only as a final resort, after all other options have been exhausted.)
I can certainly sympathize with the whole dread in the stomach thing about all of this, at least.