I agree that, overall, counting arguments are weak.
But even if you expect SGD to be used for TAI, generalisation is not a good counterexample: it could be that most counting arguments about SGD do work, with generalisation as the exception (which would not be surprising, since we selected SGD precisely because it generalises well).
If the ability of neural networks to generalise comes from a volume/simplicity property of the loss landscape and not from properties of the optimiser, then why do different optimisers have different generalisation properties? For example, Adam generalising better than SGD for transformers. (Or maybe I’m misremembering, and Adam's advantage over SGD for transformers is mediated by Adam achieving better training loss, rather than by Adam generalising better at a given training-loss value.)
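For what it's worth, here is a rough sketch of the comparison I have in mind: stop both optimisers once they hit the same training loss and compare test loss there, rather than comparing test loss after a fixed compute budget. Everything below (toy MLP, synthetic regression data, hyperparameters) is illustrative, not a claim about how the transformer results were actually obtained.

```python
# Sketch: compare SGD vs Adam at a *matched* training loss, to separate
# "Adam reaches lower training loss" from "Adam generalises better at the
# same training loss". Toy model and synthetic data; nothing is tuned.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression task with a train/test split.
d_in, d_out, n_train, n_test = 20, 1, 200, 1000
W_true = torch.randn(d_in, d_out)
X_train, X_test = torch.randn(n_train, d_in), torch.randn(n_test, d_in)
y_train = X_train @ W_true + 0.1 * torch.randn(n_train, d_out)
y_test = X_test @ W_true + 0.1 * torch.randn(n_test, d_out)

def make_model():
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))

def run(optimizer_name, train_loss_target=0.05, max_steps=5000):
    """Train until training loss reaches a fixed target (or max_steps),
    then report the test loss at that matched training loss."""
    model = make_model()
    if optimizer_name == "sgd":
        opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    else:
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for step in range(max_steps):
        opt.zero_grad()
        train_loss = loss_fn(model(X_train), y_train)
        train_loss.backward()
        opt.step()
        if train_loss.item() < train_loss_target:
            break
    with torch.no_grad():
        test_loss = loss_fn(model(X_test), y_test).item()
    return step, train_loss.item(), test_loss

for name in ["sgd", "adam"]:
    steps, tr, te = run(name)
    print(f"{name}: stopped at step {steps}, "
          f"train loss {tr:.4f}, test loss {te:.4f}")
```

If the two optimisers' test losses differ noticeably at the same training loss, that is evidence the optimiser itself matters for generalisation; if they roughly agree and Adam only gets to the target faster, the difference is mediated by training loss.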