Hmm. Seems I’ve confused some stuff. What you’ve said is correct, but I think my point is still valid.
The Kolmogorov prior is uncomputable. The time required to approximate it by simulating all Turing machines up to size n (in other words, the naive brute-force approach in question) grows at busy-beaver speeds, because you have to run every machine of that size long enough to distinguish the ones that halt from the ones that never will, and there is no general way to shortcut those simulations.
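To make that concrete, here is a toy sketch of the brute-force approach (entirely my own construction for illustration; it assumes a brainfuck-style language as the reference machine, which nobody would use in practice). The step budget is the whole problem: to recover the true prior it would have to be on the order of BB(n), because any finite cutoff silently drops machines that would have halted later.

```python
import itertools

OPS = "+-<>[]."  # instruction set of a toy (brainfuck-like) reference machine

def run_bf(prog, max_steps):
    """Run a program for at most max_steps steps.
    Returns its output (bytes) if it halts within the budget, else None."""
    jumps, stack = {}, []            # match brackets up front
    for i, c in enumerate(prog):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return None          # ill-formed program: treat as never halting
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None

    tape, ptr, pc, out, steps = [0] * 4096, 0, 0, [], 0
    while pc < len(prog):
        steps += 1
        if steps > max_steps:
            return None              # budget exhausted; might still halt later
        c = prog[pc]
        if c == "+":   tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ">": ptr = (ptr + 1) % len(tape)
        elif c == "<": ptr = (ptr - 1) % len(tape)
        elif c == ".": out.append(tape[ptr])
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return bytes(out)

def brute_force_prior(n, step_budget):
    """Enumerate every program of length <= n and run it.  For this to equal
    the true prior, step_budget would have to be BB(n)-sized, which is exactly
    the uncomputable part.  (Keep n tiny: there are len(OPS)**n programs of
    length n.)"""
    weights = {}
    for length in range(1, n + 1):
        for prog in map("".join, itertools.product(OPS, repeat=length)):
            out = run_bf(prog, step_budget)
            if out is not None:
                weights[out] = weights.get(out, 0.0) + len(OPS) ** -length
    return weights
```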
Now, there are ways to approximate Solomonoff / Kolmogorov / AIXI within sane computational limits. However, once you start doing that, you can no longer claim that you run the risk of “running most algorithms”, at least by the mathematical definition of “most” that I assumed you were using. Or rather, you will run them eventually, in the limit of infinite computation being spent on the problem. But I’d say that either the approximation is selective enough that the hazard lies in what gets selected for simulation, rather than in running “most” algorithms, or you’re in no more danger than in the brute-force case. This is related to the point I made above: I think the accidentally dangerous portion of the search space is likely to lie in the algorithms whose runtime is long relative to their complexity, which are precisely the ones that will be avoided by approximations intended to be tractable.
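Here is the same toy extended to a resource-bounded approximation, loosely in the spirit of Levin search / the speed prior (again just a sketch I’m making up, reusing OPS, run_bf and itertools from the snippet above). Each program gets a slice of one overall step budget proportional to its prior weight, so the programs that never finish are exactly the ones whose runtime is long relative to their length.

```python
def bounded_prior(n, total_budget):
    """Resource-bounded approximation (reuses OPS and run_bf from the
    previous sketch).  A program of length L gets roughly
    total_budget * len(OPS)**-L steps, so anything whose runtime is long
    relative to its complexity is simply never run to completion."""
    weights = {}
    for length in range(1, n + 1):
        per_program_steps = int(total_budget * len(OPS) ** -length)
        if per_program_steps == 0:
            break                    # programs this long get no budget at all
        for prog in map("".join, itertools.product(OPS, repeat=length)):
            out = run_bf(prog, per_program_steps)
            if out is not None:
                weights[out] = weights.get(out, 0.0) + len(OPS) ** -length
    return weights
```

As total_budget grows this does eventually run everything, which is the “you will run them eventually, in the limit” caveat above; at any finite budget, though, the long-runtime, short-description programs are exactly the ones excluded.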