Unlike Word, the human genome is self-hosting. That means it is paying fair and square for any complexity advantage it might have: if Microsoft found that x86 was not expressive enough to express Word in a space-efficient manner, they could likewise implement more complex machinery to host it.
Of course, the core fact remains that the DNA of eukaryotes looks memory-efficient compared to the bloat of Word.
There was a time when Word shipped on floppy disks. From what I recall, it came on multiple floppies, but on the order of ten, not a thousand. With modern CD-ROMs and DVDs, there is simply less incentive to optimize for size. People are not going to switch from Word to LibreOffice just because the latter is only a gigabyte.
The reason MAD works (sometimes) is twofold:
ICBM launches are easy to detect.
It is hard to wipe out an enemy's second-strike capability before they know what is going on.
By contrast, there is no fire alarm for ASI. Nobody knows how many nodes a neural net needs to start a self-improvement cascade that ends in the singularity, or whether such a thing is even possible. Nor does anybody know whether an ASI could jump by a few hundred IQ points through algorithmic gains alone, or whether it would first need to design new chip fabs.
--
Some more nitpicks about specifics:
To protect against ‘cyber’ attacks, the obvious defense is to air-gap your cluster. Granted, there have been attacks on air-gapped systems, such as Stuxnet. But that took years of careful planning and an offensive budget likely a few orders of magnitude higher than what the Iranians were spending on IT security, and it worked exactly once.
Geolocation in chips: Data centers generally have poor GPS reception. You could add circuitry that requires the chip to connect to its manufacturer and measure the network delay, though. I will note that DRM, TPM, security enclaves and the like have been a wet dream of the content industry for as long as I have been alive, and that their success is debatable: more often than not, they are cracked sooner rather than later. If your adversary can open the chips and modify the circuitry (at a larger feature size; otherwise they would just build their own AI chips), protecting against all possible attacks seems hard. Also, individual chips likely do not have the big-picture context of what they are working on, e.g. whether they are training a large LLM or a small one. To extend the nuclear weapon analogy: sure, give your nukes anti-tampering devices, but the primary security should be that it is really hard to steal your nukes, not that removing the anti-tampering device is impossible.
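For concreteness, here is a toy sketch of how such a network-delay check could work. This is purely my own illustration, not anything from the paper or an existing attestation scheme: the reference servers, radii and timings are invented, and a real design would also need authenticated on-die timers so the response cannot be spoofed.

```python
# Toy sketch of delay-based geolocation: a reference server at a known
# location challenges the chip with a nonce and times the response.
# Since no signal travels faster than light, a round-trip time t proves
# the chip lies within c*t/2 of that server; several servers can narrow
# the region by intersecting their disks. All names and numbers below
# are hypothetical.

C_KM_PER_MS = 299.792  # vacuum light speed; fiber is slower (~200 km/ms), so this bound is conservative

def proximity_bound_km(rtt_ms: float) -> float:
    """Radius around the reference server that the chip provably lies within."""
    return C_KM_PER_MS * rtt_ms / 2

def verify_within(rtts_ms: dict[str, float], allowed_radius_km: dict[str, float]) -> bool:
    """The chip passes if, for every reference server, the measured RTT is
    small enough to prove it sits inside that server's allowed radius.
    A chip smuggled far away cannot fake a small RTT (speed of light),
    though congestion can make an honest chip fail and need a retry."""
    return all(proximity_bound_km(rtts_ms[s]) <= allowed_radius_km[s]
               for s in allowed_radius_km)

# Example: a chip registered to a data center near two reference servers
rtts = {"ref-frankfurt": 0.8, "ref-amsterdam": 4.0}        # ms, made up
radii = {"ref-frankfurt": 150.0, "ref-amsterdam": 650.0}   # km, made up
print(verify_within(rtts, radii))  # True only if both proximity proofs hold
```

Note that delay only ever proves proximity: a large RTT proves nothing, since it could just be congestion, which is part of why I doubt this becomes robust once the adversary can modify the hardware.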
Drones and pre-existing military: An AI which works by running more effective agents in human-designed drones is not an ASI. An ASI would do something far grander and deadlier: grey goo, engineered plagues, hacking the enemy's ICBMs, or something else, perhaps something no human has even thought of yet. Pretending that our current weapon systems will be anything more than sticks and stones in the face of ASI is pure pandering to the military-industrial complex.
Likewise, I do not think it is wise to “carefully integrate AI into military command and control”. As much as I distrust human decision making with regard to ICBMs, I will take them over ChatGPT any day of the week.
--
If we end up with MAIM, here is how I think it might work:
Establishment of an International AI Agency (IAIA), analogous to the IAEA.
The IAIA will inspect the whole production chain, from EUV machines and state-of-the-art chip fabs to the data centers running AI-capable chips worldwide. It will limit cluster sizes for AI training and/or verify that workloads stay below a certain size.
Any defection from IAIA inspections is treated as an attempt to develop ASI. While withdrawal from the NPT merely results in diplomatic interventions and sanctions, an attempt to develop ASI is treated not like an illicit nuclear weapons program, but like a missile launch: it immediately triggers a nuclear attack.
Of course, there are some problems with that approach:
Obviously every superpower would have incentives to develop ASI in secret.
A nation might decide that being the first to develop ASI is well worth getting nuked over.
The IAIA has no idea at what model size the runaway to ASI starts.
Politicians might be reluctant to end the world as we know it over a “mere” treaty withdrawal in a world where we do not even know whether ASI is feasible.