An AIXI is an algorithm that tries to predict sequences of input data using programs. That’s it. It doesn’t want or need anything.
AIXI is the version that optimizes over actions to maximize reward, using those program-based predictions.
Solomonoff induction is the version that “just” produces predictions.
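To make the contrast concrete, here is a toy, hedged sketch of the prediction half. Real Solomonoff induction enumerates all computable programs; this sketch assumes a tiny hand-picked hypothesis class instead, with made-up code lengths, and weights each hypothesis by 2^-length so that shorter “programs” get more prior mass. Everything here (the names, the hypotheses, the lengths) is illustrative, not from the original discussion.

```python
# Toy stand-ins for "programs": each maps a bit history to a predicted next bit.
# The integer is a pretend code length in bits (shorter = simpler = more weight).
HYPOTHESES = [
    ("always0", 2, lambda h: 0),
    ("always1", 2, lambda h: 1),
    ("alternate", 3, lambda h: 1 - h[-1] if h else 0),
    ("repeat_last", 3, lambda h: h[-1] if h else 0),
]

def solomonoff_style_predict(history):
    """Prediction only: weighted vote of hypotheses consistent with history."""
    weights = {0: 0.0, 1: 0.0}
    for name, length, prog in HYPOTHESES:
        # A hypothesis survives only if it reproduces every observed bit.
        consistent = all(prog(history[:i]) == history[i]
                         for i in range(len(history)))
        if consistent:
            weights[prog(history)] += 2.0 ** -length
    return max(weights, key=weights.get)

print(solomonoff_style_predict([0, 1, 0, 1]))  # "alternate" survives -> 0
```

An AIXI-style agent would wrap an argmax over actions around exactly this kind of weighted predictor, picking whichever action leads to the highest predicted reward; the sketch above deliberately stops at prediction.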
I would say something is optimizing if it is a computation and the simplest explanation of its behavior looks causally forward from its outputs.
If the explanation “Y is whatever it takes to make Z as big as possible” is simpler than “Y is f(X)”, then the computer is an optimizer. Of course, a toaster that doesn’t contain any chips isn’t even a computer. For example, “Deep Blue is taking whichever moves lead to a win” is a simpler explanation than a full code listing of Deep Blue.
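A crude way to picture this description-length test, with entirely made-up “lengths”: explicitly listing every output (the “Y is f(X)” route, taken to the extreme of a lookup table) grows with the amount of behaviour, while the forward-looking rule “pick whatever maximizes Z” stays constant size. The string lengths below are stand-ins for real program lengths, purely for illustration.

```python
def listing_length(outputs):
    # "Y is f(X)" as a lookup table: spell out every output; cost grows
    # linearly with the number of moves observed.
    return len(repr(outputs))

def argmax_rule_length():
    # "Y is whatever makes Z as big as possible": a fixed-size rule,
    # no matter how much behaviour it has to explain.
    return len("argmax over actions of Z")

moves = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3]
is_optimizer = argmax_rule_length() < listing_length(moves)
print(is_optimizer)
```

On this toy measure, once the behaviour is long enough, the short forward-looking rule wins and the system gets labelled an optimizer; for a very short behaviour trace, the explicit listing can still be the simpler explanation.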
I would say something is an optimizer if you have more knowledge of Z than Y. Of course, this is “subjective” in that it is relative to your knowledge. But if you have such deep knowledge of the computation that you know all about Y, then you can skip the abstract concept of “optimizer” and just work out what will happen directly.
Ok. Are you suggesting a better technical definition, or are you suggesting going back to the subjective approach?
No it doesn’t.
It doesn’t have one. An AIXI isn’t a learning system, and doesn’t have a reward function.
No, it will sit there figuring out the shortest code sequence that predicts its input. The only thing it can do.
What external behaviour?