Has anyone tried to actually DO Solomonoff induction against the real world? If I understand, it’s incomputable, and even the idea of encoding reality as a program… well, it would be a very big program. So except as a pointer to and clarification of Occam’s razor, does it have a real world use?
Generalizing: is it actually possible to use pure Bayesianism in any non-contrived, non-trivial context? And if purity can’t be attained, is there an optimal impure approximation?
is it actually possible to use pure Bayesianism in any non-contrived, non-trivial context?
Sure. The difficulty of actually doing Solomonoff induction is irrelevant, because SI isn't actually part of Bayesianism as everyone in the world except Eliezer Yudkowsky defines it. Cox's Theorem gives us the basic laws of probability, but there is nothing comparable telling us that algorithmic probability is the correct prior to use. A prior is an encoding of one's prior knowledge, or state of information, before seeing the experimental data, and we have no a priori reason to expect simple explanations for everything.
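Although full Solomonoff induction is incomputable, the idea behind "algorithmic probability as a prior" can be illustrated in a deliberately contrived finite setting: weight each hypothesis by 2^(-description length) and update with Bayes' rule. A minimal sketch, where the four generating rules and their "description lengths" are invented purely for illustration and are not real program lengths:

```python
from fractions import Fraction

# Toy hypothesis space: each "program" is a rule generating a bit sequence,
# paired with a made-up description length standing in for program length.
# (Rules and lengths are illustrative assumptions, not actual Kolmogorov complexities.)
hypotheses = {
    "all zeros":    (lambda n: 0,            2),
    "all ones":     (lambda n: 1,            2),
    "alternating":  (lambda n: n % 2,        3),
    "ones after 2": (lambda n: int(n >= 2),  5),
}

# Simplicity prior: weight 2^-length, normalized over this finite space.
total = sum(Fraction(1, 2**L) for _, L in hypotheses.values())
prior = {h: Fraction(1, 2**L) / total for h, (_, L) in hypotheses.items()}

def posterior(data):
    """Bayes update: a deterministic hypothesis has likelihood 1 if it
    reproduces the observed bits exactly, and 0 otherwise."""
    post = {}
    for h, (rule, _) in hypotheses.items():
        fits = all(rule(i) == bit for i, bit in enumerate(data))
        post[h] = prior[h] if fits else Fraction(0)
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# After seeing [0, 1, 0, 1], only "alternating" is consistent with the data.
# After seeing [0, 0], both "all zeros" and "ones after 2" fit, but the
# simplicity prior favors "all zeros" by 2^5 / 2^2 = 8 to 1.
print(posterior([0, 1, 0, 1]))
print(posterior([0, 0]))
```

The point of the toy is that the update machinery is just ordinary Bayes; whether the 2^-length prior is the *right* prior is exactly the question under dispute above.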
I seem to recall Eliezer writing a post on this, and he did not seem to disagree with the above passage.
But we could establish an a posteriori one.