It suggests putting more weight on a plan to get AI research globally banned. I am skeptical that this would work (though if burning all GPUs would count as a pivotal act, the chances of success are significantly higher), but a technical solution seems very unlikely as well.
In addition, at least some purported technical solutions to AI risk seem to meaningfully increase the risk to humanity. If someone is creating an AGI to exercise sufficient control over the world to execute a pivotal act, that enormously raises the stakes of being first, which incentivizes cutting corners. It also makes it more likely that the AGI will destroy humanity, and do so sooner.