There are things Solomonoff Induction can’t understand which Solomonoff Induction Plus One can comprehend, but these things are not computable. In particular, if you have an agent with a hypercomputer that uses Solomonoff Induction, no Solomonoff Inductor will be able to simulate that hypercomputer. AIXI is outside AIXI’s model space. You need a hypercomputer plus one to comprehend the halting behavior of hypercomputers.
But a Solomonoff Inductor can predict the behavior of a hypercomputer at least as well as you can, because you’re computable, and so you’re inside the Solomonoff Inductor. Literally. It’s got an exact copy of you in there.
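A minimal sketch of why, assuming the standard universal semimeasure $M$ and prefix complexity $K$ (the notation is mine, not the commenter’s): for any lower semicomputable semimeasure $\mu$, including whatever computable predictor you amount to,

$$M(x) \;\ge\; 2^{-K(\mu)}\,\mu(x) \quad \text{for all finite strings } x,$$

so $M$’s predictions about the hypercomputer are at worst a constant factor behind yours.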
It seems easy to imagine a hypercomputation-capable universe where computable beings evolve, ponder the laws of physics, and form an uncomputable theory of everything which they can’t use predictively in the general case but which they can use to design a hypercomputer. S.I. might be able to do this, but it is not obvious how (edit: i.e. even though AIXI includes those computable beings with their theory, they aren’t really capable of replicating the existing data with their theory, so they are not part of the S.I. sum, and so, even though they can do better by building a hypercomputer, AIXI won’t use them).
It’s obvious to me; an S.I. user does it the same way the computable beings do.
The issue is that besides the computable beings, S.I. includes a lot of crud that predicts the past observations without containing a ‘compiler’ for any concept like ‘hey, the universe might be doing something uncomputable here; we had better come up with an intricate experiment to test this, because having a hypercomputer would be awesome’. The shortest crud influences actions the most. edit: it is also unclear how well AIXI can seek out experimentation. We do experiments because we guess that with more knowledge we’ll have more power. It’s like self-improvement: it requires some reflective thinking. For AIXI it’s like seeking futures where its own predictor works badly.
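A rough sketch of the weighting being described, assuming the usual monotone machine $U$ (again, my notation): the universal prior is

$$M(x) \;=\; \sum_{p\,:\,U(p)=x*} 2^{-|p|},$$

so among all programs that reproduce the past observations $x$, the posterior mass concentrates on the shortest ones, whether or not those programs encode anything like ‘the universe might be uncomputable, let’s run an experiment’.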
Easier example: we can formulate a theory containing true real numbers without penalizing the theory for the code that discretizes it or for the code that reconstructs past observations. We can assign some vague ‘simplicity’ values to theories that are not computable; namely, a theory with true reals is simpler than a discretization of that theory with a made-up discretization constant. We don’t rate theories by the size of their implementation by applied mathematicians. S.I. does rate them by the size of a discrete, computable implementation. It’s no help if S.I. contains, somewhere, a neatly separable compiler plus our theory, if the weight assigned to it is small.
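One way to write the contrast down, with made-up symbols: let $T$ be the code for the discretized physics and $D$ the discretization constant plus the decoder that recreates the past observations. Our informal ranking charges the theory roughly $|T|$, while S.I. weights the hypothesis it actually contains by about

$$2^{-(|T|+|D|)},$$

so the discretization and reconstruction machinery counts against the theory in exactly the way the applied-mathematician ranking does not.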