Note that in general, this sort of specification is uncomputable. To find the lexicographically first object which can’t be described in less than a million bits, we would have to make a list of all objects which can be described in less than a million bits (and return the first thing that’s not on the list). But we don’t know if our UTM will halt for any given string with less than a million bits, so we can never know if our list is complete.
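The failure mode can be made concrete with a toy sketch. Below, a hypothetical two-instruction "UTM" stands in for a real one (bitstrings starting with 0 print their tail; bitstrings starting with 1 never halt), and an arbitrary step budget stands in for the halting test we don't have — without such a budget, the enumeration itself would hang on the first non-halting description:

```python
from itertools import product

# Toy "UTM" (an illustrative assumption, not any real machine):
#   '0' + s  -> halts immediately and outputs s
#   '1' + s  -> loops forever, never outputs

def run(description, max_steps):
    """Run a toy program for at most max_steps; return its output or None."""
    if description.startswith('0'):
        return description[1:]      # halts right away with its tail as output
    steps = 0
    while steps < max_steps:        # simulates a non-halting program
        steps += 1
    return None                     # gave up: no output within the budget

def first_undescribable(max_len, max_steps):
    """Lexicographically first string not output by any description
    shorter than max_len bits within max_steps steps."""
    produced = set()
    for n in range(max_len):
        for bits in product('01', repeat=n):
            out = run(''.join(bits), max_steps)
            if out is not None:
                produced.add(out)
    # enumerate candidates in length-then-lexicographic order
    length = 0
    while True:
        for bits in product('01', repeat=length):
            s = ''.join(bits)
            if s not in produced:
                return s
        length += 1
```

With a real UTM no budget provably suffices: some description might produce its output only after more steps than we allotted, so the set of "describable" objects we collect can never be known to be complete — which is exactly the problem above.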
Yes, and that’s sort of intentional. I was trying to come up with a mathematical model of an agent that can deal with uncomputable physics. The physics of our universe seems likely to be computable, but there is no a priori reason to assume that it must be. We may eventually discover a law of physics that’s not computable, or find out that we are in a simulation running inside a larger universe that has uncomputable physics. Agents using UTM-based priors can’t deal with these scenarios.
So I tried to find a “better”, i.e., more expressive, language for describing objects, but then realized that any fixed formal language has a similar problem. Here’s my current idea for solving this: make the language extensible instead of fixed. That is, define a base language, plus a procedure for extending the language. Then, when the agent encounters some object that can’t be described concisely in its current language, it recursively extends the language until a short description is possible. What the extension procedure should be is still unclear.
I agree that there are very interesting questions here. We have quite natural ways of describing uncomputable functions very far up the arithmetical hierarchy, and it seems that they can be described in some kind of recursive language even if the things they describe are not recursive (using “recursive” in the recursion-theory sense both times). Turing tried something like this in Systems of Logic Based on Ordinals (Turing, 1939), but that was with formal logic, and with systems where you repeatedly add the Gödel sentence of a system into the system as an axiom, repeating this into the transfinite. A similar thing could be done describing levels of computability transfinitely far up the arithmetical hierarchy, using recursively represented ordinals to index them. However, people like you and me will then want to use certain well-defined but non-recursive ordinals to do the indexing, and it seems to degenerate in the standard kind of way, just a lot further up the hierarchy than before.
And then there are objects that are completely outside the arithmetical hierarchy, but we probably shouldn’t assign zero prior to those either. Things like large cardinals, perhaps.
Another mystery is, why did evolution create minds capable of thinking about these issues, given that agents equipped with a fixed UTM-based prior would have done perfectly fine in our place, at least up to now?
We aren’t really Bayesian reasoning machines at all, and it isn’t really accurate to speak of us having a prior. We choose a prior in order to use Bayesian reasoning to analyze a situation, and we seek to bend our natural reasoning to a Bayesian template in order to improve its accuracy, but we cannot wholly succeed in doing so. So the problem you raise should worry someone building AGI, but it’s not realistic to imagine a human agent becoming so Bayesian that they swallow the Solomonoff prior whole and are literally unable to contemplate super-Turing Universes.
I don’t think it’s unreasonable, therefore, to adopt the Solomonoff prior as a useful model to aid reasoning and discussion, and rely on our human ability to make and adopt a new, super-Turing model if some more general prior would have favoured it.
We may eventually discover a law of physics that’s not computable, or find out that we are in a simulation running inside a larger universe that has uncomputable physics. Agents using UTM-based priors can’t deal with these scenarios.
Then how can you deal with these scenarios? Did the idiot God make you better equipped for this task, O uncomputable ape-brain?
The idea of agents using UTM-based priors is a human invention, and therefore subject to human error. I’m not claiming to have an uncomputable brain, just that I’ve found such an error.
For a specific example of how human beings might deal with such scenarios, compared to agents using UTM-based priors, see “is induction unformalizable?”.
A model of the environment deals in observations and behaviors, not in statements about “uncomputability” and such. No observation should be left out or declared impossible. If you, as a human, decide to trust something you label a “halting oracle”, that’s your decision, and it’s a decision you’d want any trusted AI to carry through as well.
I suspect that the roots of this confusion are something not unlike the mind projection fallacy, with magical properties attributed to models, but I’m not competent to discuss the domain-specific aspects of this question.
Not a serious problem—just consider objects which can’t be described in less than a million bits after a million timesteps instead.
If a problem is so serious that you have to change your prior, then I’d call that a serious problem.
Did you understand why the problem is not serious? Your comment was nit-picking the example. Other similar examples make exactly the same point—but are not vulnerable to such nit-picking.