No, my problem with the hawks, as far as this criticism goes, is that they aren't repeatedly and explicitly saying what they will do.
One issue with “repeatedly and explicitly saying what they will do” is that it invites competition. Many of the things that China hawks might want to do would be outside the Overton window. As Eliezer describes in AGI Ruin:
The example I usually give is “burn all GPUs”. This is not what I think you’d actually want to do with a powerful AGI—the nanomachines would need to operate in an incredibly complicated open environment to hunt down all the GPUs, and that would be needlessly difficult to align. However, all known pivotal acts are currently outside the Overton Window, and I expect them to stay there. So I picked an example where if anybody says “how dare you propose burning all GPUs?” I can say “Oh, well, I don’t actually advocate doing that; it’s just a mild overestimate for the rough power level of what you’d have to do, and the rough level of machine cognition required to do that, in order to prevent somebody else from destroying the world in six months or three years.”