This is only meaningful under the assumption that the intelligence of an AI depends on the strength of its proof system.
Edit:
The intelligence of the AI? The proof system is necessary to provably keep the AI’s goals invariant. Its epistemic prowess (“intelligence”) need not depend on the proof system. The AI could use much weaker proof systems, or even just probabilistic tests such as “this will probably make me more powerful”, for most of its self-modifying purposes, just as you don’t demand a proof that reading a certain book will increase your intelligence.
However, if we want to keep crucial properties such as the utility function as provable invariants, then by definition a proof system that strong is exactly what we’d need.
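To make the distinction concrete, here is a minimal sketch of such a two-tier gate (the names `estimate_power_gain` and `prove_utility_invariant` are hypothetical stand-ins, not anyone’s actual proposal): ordinary capability tweaks pass on probabilistic evidence alone, while anything touching the goal system must additionally come with a machine-checkable proof that the utility function is preserved.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Modification:
    description: str
    touches_goal_system: bool
    apply: Callable[[], None]

def estimate_power_gain(mod: Modification) -> float:
    """Cheap probabilistic test ("this will probably make me more powerful").
    Stubbed out here; a real agent would run benchmarks or simulations."""
    return 0.8

def prove_utility_invariant(mod: Modification) -> Optional[str]:
    """Attempt a formal proof that the modification leaves the utility
    function unchanged. Stubbed out here; a real agent would call a
    theorem prover and return a proof object, or None on failure."""
    return None

def consider(mod: Modification) -> bool:
    # Most self-modifications only need probabilistic evidence of benefit.
    if estimate_power_gain(mod) < 0.5:
        return False
    # Changes that could affect the goal system are held to the stronger
    # standard: no proof of invariance, no modification.
    if mod.touches_goal_system and prove_utility_invariant(mod) is None:
        return False
    mod.apply()
    return True
```

The asymmetry is the point of the sketch: the expensive proof machinery is reserved for the one property the agent cannot afford to get wrong.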
A pity that you cannot be more eloquent, or produce any argument to support your claim that “No”.
I have done both, in published papers (cf. Loosemore, R. P. W. (2007). Complex Systems, Artificial Intelligence and Theoretical Psychology. In B. Goertzel and P. Wang (Eds.), Proceedings of the 2006 AGI Workshop. Amsterdam: IOS Press; and Loosemore, R. P. W. (2011b). The Complex Cognitive Systems Manifesto. In Sean Hays, Jason Scott Robert, Clark A. Miller, and Ira Bennett (Eds.), The Yearbook of Nanotechnology, Volume III: Nanotechnology, the Brain, and the Future. New York, NY: Springer).
But don’t mind me. A voice of sanity can hardly be expected to be listened to under these circumstances.