By the way, what do you mean by “messy AI”?
The short version: an AI that uses experimentation (as well as proof) to navigate the space (or subspaces) of Turing machines in its internals.
Experimentation implies to me things like compartmentalizing parts of the AI in order to contain mistakes, and potential conflict between compartments, since they haven't been proved to work well together. So vaguely brain-like.
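A minimal sketch of what I mean, with all names hypothetical: candidate programs are tried empirically inside a "compartment" that catches failures, and the ones that behave well on tests are kept, without any proof of correctness.

```python
def run_compartmentalized(program, test_cases):
    """Run a candidate program on test cases, containing any crash."""
    passed = 0
    for inp, expected in test_cases:
        try:
            if program(inp) == expected:
                passed += 1
        except Exception:
            # Mistake contained: this compartment fails, the system survives.
            pass
    return passed / len(test_cases)

def search(candidates, test_cases, threshold=1.0):
    """Keep candidates that behave well empirically, with no proof."""
    return [p for p in candidates
            if run_compartmentalized(p, test_cases) >= threshold]

# Toy example: "experimentally" find a doubling function among candidates.
candidates = [
    lambda x: x + x,
    lambda x: x * x,
    lambda x: x / 0,   # always crashes; the compartment contains it
]
tests = [(1, 2), (3, 6), (5, 10)]
survivors = search(candidates, tests)
```

The point of the sketch is only the shape: empirical trial plus containment of failures, rather than a correctness proof up front.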
I.e. provable correctness.
We can already see fairly clearly how crippling a limitation that is. Ask a robot builder whether their software is “provably correct” and you will likely get laughed back into kindergarten.