I imagine that the behavior of strong AI, or even narrow AI, is computationally irreducible. In that case, would it still be verifiable?
Our infrastructure should refuse to do anything the AI asks unless the AI itself provides a proof that the request obeys the rules we have set: even if the AI's behavior is computationally irreducible to predict, checking a supplied proof against fixed rules can still be cheap. So we force the intelligent system itself to verify anything it generates!
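To make the idea concrete, here is a minimal sketch of such a gatekeeper in Python. Everything in it is hypothetical: `Request`, `check_proof`, and `execute` are illustrative stand-ins, not a real proof system. The point is only the shape of the protocol: a small, trusted checker validates the proof, and the (possibly irreducible) generator is never trusted directly.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Request:
    action: str   # what the AI asks the infrastructure to do
    proof: str    # machine-checkable evidence that the action obeys our rules

def gate(request: Request,
         check_proof: Callable[[str, str], bool],
         execute: Callable[[str], None]) -> bool:
    """Run the action only if the supplied proof checks out; refuse otherwise.

    The checker is small and trusted; the AI that generated the request is
    not. Verifying a given proof can stay cheap even when predicting the
    generator's behavior is intractable.
    """
    if check_proof(request.action, request.proof):
        execute(request.action)
        return True
    return False  # no proof, no action: the burden of proof is on the AI

# Toy usage: the "rule" is an allow-list, and the "proof" is just the matching
# allow-list entry. A real system would plug in a formal proof checker here.
if __name__ == "__main__":
    ALLOWED = {"read_sensor"}
    checker = lambda action, proof: proof == action and action in ALLOWED
    ok = gate(Request("read_sensor", "read_sensor"), checker, print)
    refused = gate(Request("launch_rocket", "trust me"), checker, print)
    print(ok, refused)  # True False
```

Note the asymmetry this relies on: as with NP-style problems, checking a candidate proof can be far easier than producing one, so the gate stays simple even if the system behind it is not.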