Couple of comments here.
In practice, the obvious way to construct an AI stack lets the AI itself be the product of another optimization engine. The model and its architecture are generated by a higher-level system that searches for a design satisfying some higher-level constraint. So the AI itself is quite mutable: it can't add capabilities without limit, because every capability must serve the design heuristic written by humans, but humans aren't needed in the loop to add capabilities.
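A minimal sketch of that two-level structure, assuming a toy search space and a made-up scoring function (all names and numbers here are illustrative, not a real system): an outer "design engine" proposes model designs and keeps whichever one best satisfies a human-written heuristic. Capabilities only accumulate insofar as they raise that score.

```python
import random

def score(design):
    # Human-written design heuristic (toy stand-in): reward "accuracy",
    # penalize model size. In a real stack this would be a trained model's
    # validation metric plus resource constraints.
    accuracy = 0.5 + 0.1 * design["layers"] - 0.05 * abs(design["width"] - 128) / 128
    size_penalty = 0.001 * design["layers"] * design["width"]
    return accuracy - size_penalty

def random_design(rng):
    # The mutable part: the model's architecture is just data the outer
    # optimizer is free to rewrite.
    return {"layers": rng.randint(1, 8), "width": rng.choice([32, 64, 128, 256])}

def design_engine(rng, iterations=200):
    # The higher-level optimization engine: no human picks the architecture,
    # but every accepted change must improve the human-written score.
    best = random_design(rng)
    for _ in range(iterations):
        candidate = random_design(rng)
        if score(candidate) > score(best):
            best = candidate
    return best

rng = random.Random(0)
best = design_engine(rng)
print(best, round(score(best), 3))
```

The point of the sketch is the division of labor: humans author `score`, and the engine explores designs unattended, so capability growth is bounded by the heuristic rather than by human effort.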
In practice, from a human perspective, a system with complex internal state can fail in a great many ways; it is inherently unreliable. That's why you have seen so many system failures in your own life, and why almost all of them trace back to complex internal state: your router fails, a laptop fails, a game console fails to update, a car infotainment system fails to update, a system at the DMV or a school or hospital goes down, and so on.