Then there is the fact that any algorithm that naively enumerates some space of algorithms qualifies, in some sense, as a FOOM seed, since it will eventually hit on some recursively self-improving AI. But that could take gigayears, so it is really not FOOM in the usual sense.
If you link it up to actuators? That doesn’t work—it bashes its brains in before it does anything interesting. Unless you have mastered spaceships and self-replication—but then you have already built a S.I.S.
Hmm good point.
I think we need an inverse AI-box—which only lets AIs out. Something like “prove Fermat’s Last Theorem and I’ll let you out”. An objection would be that we’ll come across a non-AI that just happens to print out the proof before we come across an actual AI that does so, but actually the reverse should be true: an AI embodies the intelligence to find that proof, which should be more compressible than a direct encoding of the entire proof (even if we allow the proof itself to be compressed). But it could be that encoding intelligence just requires more bits than encoding the proof of Fermat’s Last Theorem, in which case we can just pick a more difficult problem, like “cure cancer in this faithful simulation of Earth”. As we increase the difficulty of the problem, the size of the smallest non-AI that solves it should increase quickly, while the size of the smallest true AI that solves it should increase slowly.
Or perhaps the original AI box would actually function as an inverse AI box too: the human just tries to keep the AI in, so only a sufficiently intelligent AI can escape.
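A minimal sketch of the gate being proposed here, assuming we swap in a task whose answers are cheap to check mechanically: factoring a fixed composite stands in for “prove Fermat’s Last Theorem”, and the names TARGET, verifier, Candidate, and inverse_box are all invented for this illustration, not anything from the thread.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

# A hard-but-mechanically-checkable task: verifying an answer is cheap even
# though finding one may take a lot of search (or a lot of intelligence).
# The fixed composite below is a toy stand-in for "prove Fermat's Last Theorem".
TARGET = 1_000_003 * 1_000_033

def verifier(answer: str) -> bool:
    """Accept only a nontrivial factorisation of TARGET, given as 'p,q'."""
    try:
        p, q = (int(x) for x in answer.split(","))
    except ValueError:
        return False
    return 1 < p < TARGET and p * q == TARGET

@dataclass
class Candidate:
    """A boxed system: all we ever observe is the text it chooses to emit."""
    name: str
    run: Callable[[], str]  # produces the candidate's single output message

def inverse_box(candidates: Iterable[Candidate]) -> Optional[Candidate]:
    """Let a candidate 'out' only if its output passes the verifier.

    In a real setup each run would be sandboxed and resource-limited;
    here we just call it directly.
    """
    for c in candidates:
        try:
            answer = c.run()
        except Exception:
            continue  # candidates that crash simply stay in the box
        if verifier(answer):
            return c  # the gate opens only for this one
    return None

# Two toy candidates: a 'non-AI' that hard-codes the answer, and a 'searcher'
# that actually does the work. The claim in the thread is that as the task
# gets harder, the hard-coded candidate needs ever more bits to specify,
# while the searcher barely grows.
hardcoded = Candidate("lookup-table", lambda: "1000003,1000033")

def trial_division() -> str:
    n, d = TARGET, 2
    while d * d <= n:
        if n % d == 0:
            return f"{d},{n // d}"
        d += 1
    return "no factor found"

searcher = Candidate("trial-division", trial_division)

if __name__ == "__main__":
    released = inverse_box([hardcoded, searcher])
    print("released:", released.name if released else "nobody")
```

Note that, as written, the hard-coded “lookup-table” candidate is released first, which is exactly the objection above; the reply is that for a hard enough task the smallest such hard-coded candidate must be enormous, while the smallest genuine searcher grows much more slowly.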