“You could build a modular, cleanly designed AI that could make a billion sequential upgrades to itself using deterministic guarantees of correctness.”
Really? Explain how. It seems like a general property of an intelligent system that it can’t know everything about how it would react to everything. That falls out of the halting theorem (and for that matter Gödel’s first incompleteness theorem) fairly directly. It might be possible to make a billion sequential upgrades with probabilistic guarantees of correctness, but only in a low-entropy environment, and even then it’s dicey, and I have no idea how you’d prove it.
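To spell out the halting-theorem point, here’s a minimal sketch (the function names are hypothetical, purely for illustration) of the usual diagonalization: any claimed perfect predictor of a system’s own behavior can be handed a component that consults that predictor about itself and then does the opposite, so a “deterministic guarantee of correctness” can’t cover everything the system might do.

```python
def make_contrarian(predict):
    """Build a component that asks `predict` what it will do, then does the opposite."""
    def contrarian():
        forecast = predict(contrarian)   # predictor's verdict: "accept" or "reject"
        return "reject" if forecast == "accept" else "accept"
    return contrarian

def naive_predictor(component):
    """Stand-in 'deterministic guarantee': claims every component will accept."""
    return "accept"

component = make_contrarian(naive_predictor)
print("predicted:", naive_predictor(component))  # -> accept
print("actual:   ", component())                 # -> reject: the guarantee fails
```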