Can I ask a stupid question? Could something very much like “burn all GPUs” be accomplished by using a few high-altitude nuclear explosions to create very powerful EMP blasts?
There is a lot of uncertainty about how effective EMP is at destroying electronics. The potential for destruction was taken seriously: during the Cold War, for example, the US defense establishment bought laptops specially designed to resist EMPs. But for all we know, even that precaution was unnecessary.
And electronics not connected to long wires are almost certainly safe from EMP.
There is a lot of infrastructure that is inherently vulnerable to EMPs, though, such as power grid transformers, oil/gas pipelines, and even fiber optic cables (because they use repeaters). It might not fry the GPUs themselves, but it could leave you without power to run them, or an Internet connection to connect your programmers to your server farm.
About the usual example being “burn all GPUs”, I’m curious whether it’s to be understood as purely a stand-in term for the magnitude of the act, or whether it’s meant to plausibly be in solution-space.
An event of “burn all GPUs” magnitude would have political ramifications. If you achieved it as a human organization with human means, i.e. without AGI cooperation, violence on this scale would likely unite the world against you, resulting in only a one-time delay.
If the idea is an act outside the Overton Window, without AGI cooperation, shouldn’t you aim to have the general public and policymakers united against AGI, instead of against you? The semiconductor manufacturing capabilities required to make GPU- or TPU-like chips are highly centralized, with only three to four relevant fabs left. Restricting AI hardware access may not be enough to stop bad incentives indefinitely for large actors, but it seems likely to gain more time than a single “burn all GPUs” event.
For instance, killing a {thousand, fifty-thousand, million} people in a freak bio-accident seems easier than solving alignment. If you pushed a weak AI into the trap and framed it for falling into it, would that gain more time through policymaking than destroying GPUs directly (still assuming a pre-AGI world)?