I think the orthodox MIRI position is not that logical proofs are necessary, or even the most efficient way, to make a super-intelligence. It’s that humans need formal proofs to be sure that the AI will be well-behaved. A random kludgy program might be much smarter than your carefully proven one, but that’s cold comfort if it then proceeds to kill you.
Yeah.