If AI ends up intelligent enough, and with enough manufacturing capability, to threaten nuclear deterrence, I'd expect it to also deduce any conclusions I would.
So it seems mostly a question of what the world would do with those conclusions earlier, rather than not at all.
A key exception is if later AGI would be blocked on certain kinds of manufacturing to create its destabilizing tech, and if drawing attention to that earlier gets that serial bottleneck addressed sooner.