“All of this implies that there already exists the blueprint of a Universal AI which will solve almost all problems almost as quickly as if it already knew the best (unknown) algorithm for solving them.”
Heh, this sounds like a common problem with computer-science speak, like the occasional misconception that all polynomial-time algorithms are “fast,” or that they should always be used instead of exponential-time alternatives. Just because something is a constant doesn’t mean it can’t be impractically large. If you want to find the optimal AI program that’s under a megabyte by brute force, don’t you have to start doing operations with numbers like 2^10^6? When your exponents start growing exponents, units don’t even matter any more: 2^10^6 can spare a few orders of magnitude and still be effectively 2^10^6, whether that’s operations or seconds or years or ages of the universe.
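To make the “units don’t matter” point concrete, here is a small sketch that counts the decimal digits of 2^10^6 (the comment’s loose figure; strictly, a megabyte is 8 × 10^6 bits, so the brute-force candidate count would be 2^(8·10^6)). The helper name `digits_of_pow2` is mine, not from the original:

```python
import math

def digits_of_pow2(e: int) -> int:
    """Decimal digit count of 2**e: float estimate, verified exactly with ints."""
    d = math.floor(e * math.log10(2)) + 1
    # Double-check with exact integer arithmetic, so float rounding can't lie.
    assert 10 ** (d - 1) <= 2 ** e < 10 ** d
    return d

print(digits_of_pow2(10 ** 6))      # 2^10^6 has 301030 decimal digits
print(digits_of_pow2(8 * 10 ** 6))  # 2^(8*10^6) has 2408240 decimal digits
```

For scale: the commonly cited atom count of the observable universe, ~10^80, has 81 digits; the candidate count here has millions of digits, so shaving “a few orders of magnitude” off it is indeed invisible.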
Jürgen knows all this and often talks about it; I think he just likes to present his research in its sexiest light.