What would you do?
A lot of it depends on what sort of system an AI implementation ends up being. These are only examples of things you might need to do.
1) Prove that the messy AI does not have the same sort of security holes that modern computers do. A true AI botnet would be a very scary thing.
2) If it has a component that evolves, prove that it is an evolutionarily stable strategy for whatever is evolving to optimize what you want optimized (see the sketch below).
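To make point 2 a bit more concrete, here is a minimal sketch of what a merely empirical version of that check could look like, using replicator dynamics on a made-up two-strategy payoff matrix. Everything here is a placeholder, and note it is a simulation, not the proof the point actually asks for: a real proof would have to cover the whole strategy space.

```python
import numpy as np

# Hypothetical payoff matrix for a two-strategy game inside the evolving
# component. Strategy 0 is the behaviour we want optimized; strategy 1 is
# a mutant deviation. (Illustrative numbers only.)
PAYOFF = np.array([[3.0, 1.0],
                   [2.0, 1.5]])

def replicator_step(pop, dt=0.01):
    """One Euler step of the replicator dynamics: dx_i = x_i * (f_i - avg_f)."""
    fitness = PAYOFF @ pop          # fitness of each strategy vs the population
    avg = pop @ fitness             # population-average fitness
    return pop + dt * pop * (fitness - avg)

def resists_invasion(eps=0.01, steps=20000):
    """Empirical ESS check: seed a small mutant fraction and see whether
    the resident strategy's share returns toward 1."""
    pop = np.array([1.0 - eps, eps])
    for _ in range(steps):
        pop = replicator_step(pop)
        pop = np.clip(pop, 0.0, 1.0)
        pop /= pop.sum()            # renormalize against numerical drift
    return pop[0] > 1.0 - eps       # True if the mutant share shrank

if __name__ == "__main__":
    print("resident strategy resists invasion:", resists_invasion())
```

This only tests one mutant against one resident; the gap between "passed this simulation" and "proved stable against all mutants" is exactly the messy-AI problem.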
There might also be work done to try to computationally categorize how real neural nets differ from computers, and how human brains differ from other animal brains. Anything to help us push back the “Here be dragons” signs on portions of Messy AI, so we know what we can and can't use. And if we can't figure out how to use the human-level bits safely, don't use them until we have no other choice.
There are also questions of how best to deploy AI (closed source/design gives you more control; open source gets more minds checking the code).
If this is a bit jumbled, it is because Messy AI has a huge number of possibilities and we are really pretty ignorant about it.
By the way, what do you mean by “messy AI”?
The short version: AI that uses experimentation (as well as proof) to navigate through the space (or subspaces) of Turing machines in its internals.
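A toy illustration of what “navigating by experimentation” means, assuming nothing about a real architecture: randomly compose a few made-up primitives into candidate programs and keep whatever passes the tests, with no proof it generalizes. All the names and numbers here are placeholders.

```python
import random

# Hypothetical toy "space of programs": compositions of a few primitives.
# A real messy AI would search a vastly richer space than this.
PRIMITIVES = [lambda x: x + 1, lambda x: x * 2,
              lambda x: x - 3, lambda x: x * x]

# (input, expected) pairs for the target behaviour f(x) = 2x + 2
TESTS = [(1, 4), (2, 6), (5, 12)]

def random_program(depth=3):
    """Sample a candidate program as a random composition of primitives."""
    return [random.choice(PRIMITIVES) for _ in range(depth)]

def run(program, x):
    for f in program:
        x = f(x)
    return x

def search(trials=10000):
    """Navigate by experimentation: return the first candidate that passes
    every test. Nothing is proved about inputs outside the test set."""
    for _ in range(trials):
        prog = random_program()
        if all(run(prog, x) == y for x, y in TESTS):
            return prog
    return None

if __name__ == "__main__":
    print("found a program:", search() is not None)
```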
Experimentation implies to me things like compartmentalization of parts of the AI in order to contain mistakes, and potential conflict between compartments, as they haven't been proved to work well together. So: vaguely brain-like.
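A crude sketch of the compartmentalization idea, assuming Python and one untrusted compartment per OS process, so that a crash or hang stays contained instead of taking down the rest of the system. The component and its task are hypothetical stand-ins.

```python
import multiprocessing as mp

def untrusted_component(task, out):
    """Stand-in for an experimental, unproven compartment. (Hypothetical:
    a real system would load an evolved or learned module here.)"""
    if task == "explode":
        raise RuntimeError("unproven component misbehaved")
    out.put(task.upper())

def run_compartmentalized(task, timeout=2.0):
    """Run the component in its own process so crashes and hangs are
    contained; return None on any failure."""
    out = mp.Queue()
    proc = mp.Process(target=untrusted_component, args=(task, out))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():          # hang: kill the compartment, not the system
        proc.terminate()
        proc.join()
        return None
    if proc.exitcode != 0 or out.empty():
        return None              # crash or silent failure stays contained
    return out.get()

if __name__ == "__main__":
    print(run_compartmentalized("plan route"))   # 'PLAN ROUTE'
    print(run_compartmentalized("explode"))      # None: failure contained
```

The conflict-between-compartments point falls out naturally: two such processes can disagree, and some arbitration layer has to decide between them, which is part of what makes it brain-like.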
I.e. provable correctness.
We can already see fairly clearly how crippling a limitation that is. Ask a robot builder whether their software is “provably correct” and you will likely get laughed back into kindergarten.