Admittedly, this is no major result; it is neither necessary nor sufficient. But if you want safe self-optimizing AI, then you (and the AI) need to reason about its source. If you don’t understand how the AI reasons about itself, you can’t control it. If you force the AI to reason in a way you can follow too, e.g. by piggybacking on a sufficiently strong type system, then you at least have a chance of reasoning about it. There may be other ways to reason about self-modifying programs that don’t rely on types, but these are presumably either equivalent to such types (in which case the result is helpful in that area too) or more general (in which case proofs likely become more complicated, if they are feasible at all). So some equivalent of these types is needed for reasoning about safe self-modifying AI.
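
To make the "piggybacking on a sufficiently strong type system" idea concrete, here is a minimal sketch in Lean; the names `Program`, `Safe`, and `SafeStep` are my own illustrative assumptions, not anything from the result being discussed. The point is only that a proposed self-modification type-checks exactly when it comes bundled with a proof that the safety property is preserved, so any chain of accepted rewrites stays safe by construction:

```lean
-- Minimal sketch; Program, Safe, and SafeStep are illustrative assumptions.

-- Abstract programs and a safety predicate on them.
axiom Program : Type
axiom Safe : Program → Prop

-- A proposed self-modification of `old`: the successor program together
-- with evidence that the modification preserves the safety property.
structure SafeStep (old : Program) where
  next      : Program
  preserves : Safe old → Safe next

-- Accepting a step yields a program that is safe by construction,
-- so safety is maintained across any chain of such steps.
def applyStep (old : Program) (hOld : Safe old) (s : SafeStep old) :
    { p : Program // Safe p } :=
  ⟨s.next, s.preserves hOld⟩
```

In a sketch like this, "controlling" the AI's self-modification reduces to checking the proof term it supplies, which is exactly the kind of reasoning a human (or a verifier) can do without re-deriving the AI's own reasoning.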