The busy beaver function is the lower bound on how fast a function has to grow before it counts as “uncomputable”. When people say that Solomonoff induction or Kolmogorov complexity is uncomputable, that’s what they mean. When you say you’re having trouble with an order of magnitude estimate, I have to wonder whether you attempted to come up with an order of magnitude estimate for the exponent. For example, I’m pretty sure that no conceivable amount of computronium will get to 10^10000 territory, and that I can safely ignore ethical problems that only arise once we’re in that territory. I’m rather skeptical of the idea that we could even get to 10^1000 territory by brute force, even if we turned the galaxy into computronium.
Therefore, when you say you’re concerned about computing Solomonoff induction or Kolmogorov priors, I’m left wondering whether you’re worried about 5- or 6-state machines, because there is no conceivable way the program would ever get to the 7-state machines.
And you also can’t simply dismiss the longest-running small Turing machines, in my opinion. If there exists an accidental computational hazard worth worrying about, I would assume it’s in the form of something analogous to “run Conway’s Life until the board is empty”—in other words, precisely a small Turing machine that is likely to run a very long time before halting, and whose halting status is difficult to determine.
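To make that concrete, here is a toy Python sketch of the kind of program I have in mind (purely a hypothetical illustration, not code from any actual system): seed a small Life board and run it until the board empties, which for most seeds you cannot predict without just running it.

```python
# Toy illustration of an "accidentally long-running" small program:
# iterate Conway's Life on a small wrapped board until the board is empty.
# Whether (and when) this loop terminates is hard to tell without running it.
import random

SIZE = 16  # small toroidal board


def step(live):
    """Advance one Life generation; `live` is a set of (row, col) cells."""
    counts = {}
    for (r, c) in live:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    key = ((r + dr) % SIZE, (c + dc) % SIZE)
                    counts[key] = counts.get(key, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}


random.seed(0)
board = {(r, c) for r in range(SIZE) for c in range(SIZE) if random.random() < 0.35}
generations = 0
while board:               # halts only if the board ever becomes empty
    board = step(board)
    generations += 1
print(generations)
```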
I wasn’t really trying to add original material to the discussion. This post was supposed to be a summary of ideas already discovered on LW, and I wanted to know if I was missing any, or if this was a good summary of the ideas.
I think the summary is overly confused. If the accidental hazards are of purely philosophical interest, they shouldn’t occupy such a large fraction of the post, and shouldn’t be the first thing discussed. I almost didn’t bother reading the rest.
I think the non-simulation hazards have little in common with the simulation hazards from an ethics standpoint, and less from a practical FAI programming view. If you’re having trouble separating them, then I would take that as a strong clue that you’ve chosen the wrong points to carve at.
Thanks for all your feedback!
You’re welcome! There’s some interesting stuff here, though I’m skeptical that there’s much of interest in anything except the intentional, directed simulations questions.
The busy beaver function is the lower bound on how fast a function has to grow before it counts as “uncomputable”.
Ah, I see. That’s not the definition, but it is a fact about it. Although I think you might be confused about the definition of “uncomputable”. It doesn’t have to do with functions growing. It’s just a separate, awesome fact that all computable functions grow slower than the (uncomputable) busy beaver function. There are many uncomputable functions that grow slower than computable functions.
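To pin down the two facts (standard statements, just so we’re talking about the same thing):

```latex
% (1) Rado's theorem: the busy beaver function eventually dominates every
%     total computable function.
\forall \text{ total computable } f \;\; \exists N \;\; \forall n \ge N : \; \mathrm{BB}(n) > f(n)

% (2) Slow growth does not imply computability: the characteristic function
%     of the halting set is bounded by 1 yet uncomputable.
\chi_{\mathrm{HALT}}(x) \in \{0, 1\} \ \text{for all } x, \quad \text{yet } \chi_{\mathrm{HALT}} \text{ is not computable}
```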
Hmm. Seems I’ve confused some stuff. What you’ve said is correct, but I think my point is still valid.
The Kolmogorov prior is uncomputable. The time required to approximate it by simulating Turing machines up to size n (in other words, the naive brute force approach in question) grows at busy beaver speeds: to know which machines contribute, you have to run each one until it either halts or is somehow shown never to halt, the slowest halting machine of size n runs for BB(n) steps by definition, and there is no generalizable way to shortcut those simulations.
Now, there are ways to approximate Solomonoff / Kolmogorov / AIXI with sane computational limits. However, once you start doing that, you can no longer claim that you run the risk of “running most algorithms”, at least by the mathematical definition of “most” that I assumed you were using. Or rather, you will eventually, as you wait for infinite computation to be expended on the problem. But I’d say that either you’re being selective enough that the hazard lies in which algorithms you select for simulation, rather than in running “most” algorithms, or you’re in no more danger than in the brute force case. This is related to the point I made above: I think the accidentally dangerous portion of the search space is likely to lie in the algorithms whose runtime is long relative to their complexity, which are precisely the ones that will be avoided by approximations intended to be tractable.
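As a rough sketch of what a bounded approximation looks like (a toy stand-in, not a real universal prefix machine; the alphabet, weights, and limits here are placeholders): enumerate every program up to some length, run each for at most a fixed number of steps, and credit weight to the ones whose output matches the data. Drop the step bound and you are back to the naive brute force, whose runtime is set by the slowest halting program, i.e. busy beaver territory; keep it and you have excluded exactly the long-running programs I’m worried about.

```python
# Toy resource-bounded approximation of a Solomonoff-style prior.
# "Programs" are strings over a tiny output-only Brainfuck-like alphabet,
# standing in for a real universal prefix machine.
from itertools import product

ALPHABET = "+-<>[]."


def run(prog, max_steps):
    """Run `prog` for at most `max_steps` steps.

    Returns the output bytes if the program halts within the bound,
    or None if it is malformed or hits the step bound."""
    stack, jump = [], {}
    for i, ch in enumerate(prog):          # match brackets up front
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            if not stack:
                return None
            j = stack.pop()
            jump[i], jump[j] = j, i
    if stack:
        return None
    tape, ptr, pc, out, steps = {}, 0, 0, [], 0
    while pc < len(prog) and steps < max_steps:
        ch = prog[pc]
        if ch == "+":
            tape[ptr] = (tape.get(ptr, 0) + 1) % 256
        elif ch == "-":
            tape[ptr] = (tape.get(ptr, 0) - 1) % 256
        elif ch == ">":
            ptr += 1
        elif ch == "<":
            ptr -= 1
        elif ch == ".":
            out.append(tape.get(ptr, 0))
        elif ch == "[" and tape.get(ptr, 0) == 0:
            pc = jump[pc]
        elif ch == "]" and tape.get(ptr, 0) != 0:
            pc = jump[pc]
        pc += 1
        steps += 1
    return bytes(out) if pc >= len(prog) else None


def bounded_prior(data, max_len, max_steps):
    """Sum toy weights of all programs (length <= max_len, runtime <= max_steps)
    whose output starts with `data`.  Removing max_steps recovers the naive
    brute force, whose runtime is governed by the slowest halting program."""
    total = 0.0
    for n in range(1, max_len + 1):
        for prog in map("".join, product(ALPHABET, repeat=n)):
            out = run(prog, max_steps)
            if out is not None and out.startswith(data):
                # toy weight; a real construction uses 2**-len(p) over a
                # prefix-free binary encoding of programs
                total += len(ALPHABET) ** -n
    return total


print(bounded_prior(b"\x01", max_len=4, max_steps=200))
```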